Test Report: QEMU_macOS 19319

b956d22c0e4b666a5d5401b6edb64a8355930c4b:2024-07-23:35468

Failed tests (97/282)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.54
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.11
55 TestCertOptions 10.15
56 TestCertExpiration 195.47
57 TestDockerFlags 12.35
58 TestForceSystemdFlag 11.41
59 TestForceSystemdEnv 9.98
104 TestFunctional/parallel/ServiceCmdConnect 32.64
176 TestMultiControlPlane/serial/StopSecondaryNode 214.12
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 103
178 TestMultiControlPlane/serial/RestartSecondaryNode 182.84
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.39
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.15
183 TestMultiControlPlane/serial/StopCluster 202.08
184 TestMultiControlPlane/serial/RestartCluster 5.25
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
186 TestMultiControlPlane/serial/AddSecondaryNode 0.07
190 TestImageBuild/serial/Setup 10.16
193 TestJSONOutput/start/Command 9.75
199 TestJSONOutput/pause/Command 0.08
205 TestJSONOutput/unpause/Command 0.04
222 TestMinikubeProfile 10.2
225 TestMountStart/serial/StartWithMountFirst 10.1
228 TestMultiNode/serial/FreshStart2Nodes 9.89
229 TestMultiNode/serial/DeployApp2Nodes 89.41
230 TestMultiNode/serial/PingHostFrom2Pods 0.08
231 TestMultiNode/serial/AddNode 0.07
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.08
234 TestMultiNode/serial/CopyFile 0.06
235 TestMultiNode/serial/StopNode 0.13
236 TestMultiNode/serial/StartAfterStop 35.63
237 TestMultiNode/serial/RestartKeepsNodes 8.33
238 TestMultiNode/serial/DeleteNode 0.1
239 TestMultiNode/serial/StopMultiNode 3.74
240 TestMultiNode/serial/RestartMultiNode 5.25
241 TestMultiNode/serial/ValidateNameConflict 20
245 TestPreload 9.92
247 TestScheduledStopUnix 9.94
248 TestSkaffold 12.58
251 TestRunningBinaryUpgrade 628.47
253 TestKubernetesUpgrade 18.74
267 TestStoppedBinaryUpgrade/Upgrade 584.66
277 TestPause/serial/Start 10.25
280 TestNoKubernetes/serial/StartWithK8s 10.03
281 TestNoKubernetes/serial/StartWithStopK8s 7.75
282 TestNoKubernetes/serial/Start 7.68
283 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.3
284 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.93
288 TestNoKubernetes/serial/StartNoArgs 5.36
290 TestNetworkPlugins/group/auto/Start 9.98
291 TestNetworkPlugins/group/kindnet/Start 9.87
292 TestNetworkPlugins/group/flannel/Start 9.86
293 TestNetworkPlugins/group/enable-default-cni/Start 9.9
294 TestNetworkPlugins/group/bridge/Start 9.83
295 TestNetworkPlugins/group/kubenet/Start 9.94
296 TestNetworkPlugins/group/custom-flannel/Start 9.76
297 TestNetworkPlugins/group/calico/Start 9.82
298 TestNetworkPlugins/group/false/Start 9.91
300 TestStartStop/group/old-k8s-version/serial/FirstStart 10.06
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
305 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/old-k8s-version/serial/Pause 0.1
311 TestStartStop/group/no-preload/serial/FirstStart 10.27
312 TestStartStop/group/no-preload/serial/DeployApp 0.09
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
316 TestStartStop/group/no-preload/serial/SecondStart 5.26
317 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
320 TestStartStop/group/no-preload/serial/Pause 0.1
322 TestStartStop/group/embed-certs/serial/FirstStart 9.84
323 TestStartStop/group/embed-certs/serial/DeployApp 0.09
324 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
327 TestStartStop/group/embed-certs/serial/SecondStart 5.2
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
331 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.87
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.11
333 TestStartStop/group/embed-certs/serial/Pause 0.11
335 TestStartStop/group/newest-cni/serial/FirstStart 11.83
336 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
343 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.25
345 TestStartStop/group/newest-cni/serial/SecondStart 5.25
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
353 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (17.54s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-909000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-909000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (17.5359885s)

-- stdout --
	{"specversion":"1.0","id":"f6370b48-1739-4422-85ac-fd6cc6d1daa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-909000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1453a482-b793-4d67-b385-da4235cb50d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19319"}}
	{"specversion":"1.0","id":"3eb2cd69-bd52-44f7-bcdd-1d4e22d3a273","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig"}}
	{"specversion":"1.0","id":"b72f3692-e060-4a13-a0fa-222f4ddced31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"0dc9a159-6a8c-40cb-b897-e99309e7db83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"18401ee9-b422-47e3-9635-0f803db306f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube"}}
	{"specversion":"1.0","id":"a5ebd227-6cec-4adb-9bae-a64d93e0ffe6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"beb48954-ff4c-481e-9fa2-3f588f1d316e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"58219ef3-3ba1-4298-bb6a-11631a59001c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"0f84241b-498b-43c8-9ad7-3b108a7c8c32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9b08199d-cc87-4ae8-b8cf-eb94df3aeb9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-909000\" primary control-plane node in \"download-only-909000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"95c557cd-a19d-4e50-b231-07cb4735391e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"43406792-901f-4a6b-82d0-5659cc247a66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104865a60 0x104865a60 0x104865a60 0x104865a60 0x104865a60 0x104865a60 0x104865a60] Decompressors:map[bz2:0x14000523240 gz:0x14000523248 tar:0x140005231f0 tar.bz2:0x14000523200 tar.gz:0x14000523210 tar.xz:0x14000523220 tar.zst:0x14000523230 tbz2:0x14000523200 tgz:0x14
000523210 txz:0x14000523220 tzst:0x14000523230 xz:0x14000523250 zip:0x14000523260 zst:0x14000523258] Getters:map[file:0x14000b0e910 http:0x140009001e0 https:0x14000900230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"4afa48b4-d6e8-4494-b5a6-bef898806396","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0723 06:55:37.478215    2067 out.go:291] Setting OutFile to fd 1 ...
	I0723 06:55:37.478392    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 06:55:37.478395    2067 out.go:304] Setting ErrFile to fd 2...
	I0723 06:55:37.478397    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 06:55:37.478519    2067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	W0723 06:55:37.478590    2067 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19319-1567/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19319-1567/.minikube/config/config.json: no such file or directory
	I0723 06:55:37.479832    2067 out.go:298] Setting JSON to true
	I0723 06:55:37.496940    2067 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1501,"bootTime":1721741436,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 06:55:37.497003    2067 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 06:55:37.501768    2067 out.go:97] [download-only-909000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 06:55:37.501943    2067 notify.go:220] Checking for updates...
	W0723 06:55:37.501999    2067 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball: no such file or directory
	I0723 06:55:37.504704    2067 out.go:169] MINIKUBE_LOCATION=19319
	I0723 06:55:37.507805    2067 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 06:55:37.512726    2067 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 06:55:37.515753    2067 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 06:55:37.518751    2067 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	W0723 06:55:37.524703    2067 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0723 06:55:37.524896    2067 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 06:55:37.528725    2067 out.go:97] Using the qemu2 driver based on user configuration
	I0723 06:55:37.528744    2067 start.go:297] selected driver: qemu2
	I0723 06:55:37.528764    2067 start.go:901] validating driver "qemu2" against <nil>
	I0723 06:55:37.528839    2067 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 06:55:37.531720    2067 out.go:169] Automatically selected the socket_vmnet network
	I0723 06:55:37.537579    2067 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0723 06:55:37.537678    2067 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 06:55:37.537704    2067 cni.go:84] Creating CNI manager for ""
	I0723 06:55:37.537720    2067 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0723 06:55:37.537762    2067 start.go:340] cluster config:
	{Name:download-only-909000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-909000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 06:55:37.543075    2067 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 06:55:37.546877    2067 out.go:97] Downloading VM boot image ...
	I0723 06:55:37.546902    2067 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0723 06:55:46.569430    2067 out.go:97] Starting "download-only-909000" primary control-plane node in "download-only-909000" cluster
	I0723 06:55:46.569457    2067 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0723 06:55:46.639595    2067 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0723 06:55:46.639613    2067 cache.go:56] Caching tarball of preloaded images
	I0723 06:55:46.639795    2067 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0723 06:55:46.647912    2067 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0723 06:55:46.647920    2067 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0723 06:55:46.724317    2067 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0723 06:55:53.803533    2067 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0723 06:55:53.803707    2067 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0723 06:55:54.499049    2067 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0723 06:55:54.499243    2067 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/download-only-909000/config.json ...
	I0723 06:55:54.499261    2067 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/download-only-909000/config.json: {Name:mkc0920811cfb85cd807206e046ab53156a5fad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 06:55:54.499496    2067 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0723 06:55:54.499690    2067 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0723 06:55:54.940396    2067 out.go:169] 
	W0723 06:55:54.945455    2067 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104865a60 0x104865a60 0x104865a60 0x104865a60 0x104865a60 0x104865a60 0x104865a60] Decompressors:map[bz2:0x14000523240 gz:0x14000523248 tar:0x140005231f0 tar.bz2:0x14000523200 tar.gz:0x14000523210 tar.xz:0x14000523220 tar.zst:0x14000523230 tbz2:0x14000523200 tgz:0x14000523210 txz:0x14000523220 tzst:0x14000523230 xz:0x14000523250 zip:0x14000523260 zst:0x14000523258] Getters:map[file:0x14000b0e910 http:0x140009001e0 https:0x14000900230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0723 06:55:54.945477    2067 out_reason.go:110] 
	W0723 06:55:54.954272    2067 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 06:55:54.958418    2067 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-909000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (17.54s)
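
The root cause is visible in the error above: the checksum URL for the v1.20.0 darwin/arm64 kubectl binary returns HTTP 404, so the download (and every assertion that depends on it) fails. A quick manual check against the same URL from the log, as a hedged sketch: it assumes outbound network access, and the v1.21.0 comparison is an assumption based on v1.20.0 appearing to predate published darwin/arm64 kubectl binaries.

	# Fetch only the headers of the checksum file minikube tried to download;
	# a 404 status confirms the release artifact itself is missing upstream,
	# not a cache problem on the CI host.
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1
	# Comparison against a release expected to ship darwin/arm64 binaries:
	curl -sI https://dl.k8s.io/release/v1.21.0/bin/darwin/arm64/kubectl.sha256 | head -n 1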

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.11s)

=== RUN   TestOffline
=== PAUSE TestOffline


=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-252000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-252000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.963456625s)

-- stdout --
	* [offline-docker-252000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-252000" primary control-plane node in "offline-docker-252000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-252000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:32:45.716655    4874 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:32:45.716796    4874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:32:45.716799    4874 out.go:304] Setting ErrFile to fd 2...
	I0723 07:32:45.716802    4874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:32:45.716973    4874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:32:45.718251    4874 out.go:298] Setting JSON to false
	I0723 07:32:45.735460    4874 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3729,"bootTime":1721741436,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:32:45.735561    4874 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:32:45.741174    4874 out.go:177] * [offline-docker-252000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:32:45.749219    4874 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:32:45.749239    4874 notify.go:220] Checking for updates...
	I0723 07:32:45.755129    4874 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:32:45.758141    4874 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:32:45.761143    4874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:32:45.764148    4874 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:32:45.767157    4874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:32:45.770493    4874 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:32:45.770545    4874 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:32:45.774155    4874 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:32:45.781117    4874 start.go:297] selected driver: qemu2
	I0723 07:32:45.781128    4874 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:32:45.781136    4874 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:32:45.782996    4874 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:32:45.786186    4874 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:32:45.789208    4874 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:32:45.789253    4874 cni.go:84] Creating CNI manager for ""
	I0723 07:32:45.789261    4874 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:32:45.789265    4874 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 07:32:45.789305    4874 start.go:340] cluster config:
	{Name:offline-docker-252000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:32:45.792931    4874 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:32:45.800149    4874 out.go:177] * Starting "offline-docker-252000" primary control-plane node in "offline-docker-252000" cluster
	I0723 07:32:45.804149    4874 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:32:45.804180    4874 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:32:45.804190    4874 cache.go:56] Caching tarball of preloaded images
	I0723 07:32:45.804264    4874 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:32:45.804269    4874 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:32:45.804341    4874 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/offline-docker-252000/config.json ...
	I0723 07:32:45.804351    4874 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/offline-docker-252000/config.json: {Name:mk2a26a658acabdf71945ab4eadf0e16f074ee7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:32:45.804699    4874 start.go:360] acquireMachinesLock for offline-docker-252000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:32:45.804732    4874 start.go:364] duration metric: took 25.833µs to acquireMachinesLock for "offline-docker-252000"
	I0723 07:32:45.804743    4874 start.go:93] Provisioning new machine with config: &{Name:offline-docker-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:32:45.804811    4874 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:32:45.809260    4874 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0723 07:32:45.824867    4874 start.go:159] libmachine.API.Create for "offline-docker-252000" (driver="qemu2")
	I0723 07:32:45.824896    4874 client.go:168] LocalClient.Create starting
	I0723 07:32:45.824971    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:32:45.825002    4874 main.go:141] libmachine: Decoding PEM data...
	I0723 07:32:45.825012    4874 main.go:141] libmachine: Parsing certificate...
	I0723 07:32:45.825056    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:32:45.825082    4874 main.go:141] libmachine: Decoding PEM data...
	I0723 07:32:45.825090    4874 main.go:141] libmachine: Parsing certificate...
	I0723 07:32:45.825491    4874 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:32:45.974145    4874 main.go:141] libmachine: Creating SSH key...
	I0723 07:32:46.169521    4874 main.go:141] libmachine: Creating Disk image...
	I0723 07:32:46.169536    4874 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:32:46.172326    4874 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/disk.qcow2
	I0723 07:32:46.181869    4874 main.go:141] libmachine: STDOUT: 
	I0723 07:32:46.181899    4874 main.go:141] libmachine: STDERR: 
	I0723 07:32:46.181962    4874 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/disk.qcow2 +20000M
	I0723 07:32:46.190806    4874 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:32:46.190827    4874 main.go:141] libmachine: STDERR: 
	I0723 07:32:46.190846    4874 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/disk.qcow2
	I0723 07:32:46.190849    4874 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:32:46.190862    4874 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:32:46.190893    4874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:c8:9f:62:de:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/disk.qcow2
	I0723 07:32:46.193028    4874 main.go:141] libmachine: STDOUT: 
	I0723 07:32:46.193051    4874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:32:46.193074    4874 client.go:171] duration metric: took 368.176791ms to LocalClient.Create
	I0723 07:32:48.195221    4874 start.go:128] duration metric: took 2.390430417s to createHost
	I0723 07:32:48.195290    4874 start.go:83] releasing machines lock for "offline-docker-252000", held for 2.390593041s
	W0723 07:32:48.195327    4874 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:32:48.211072    4874 out.go:177] * Deleting "offline-docker-252000" in qemu2 ...
	W0723 07:32:48.232254    4874 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:32:48.232275    4874 start.go:729] Will try again in 5 seconds ...
	I0723 07:32:53.234342    4874 start.go:360] acquireMachinesLock for offline-docker-252000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:32:53.234782    4874 start.go:364] duration metric: took 344.833µs to acquireMachinesLock for "offline-docker-252000"
	I0723 07:32:53.234927    4874 start.go:93] Provisioning new machine with config: &{Name:offline-docker-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:32:53.235173    4874 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:32:53.254682    4874 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0723 07:32:53.306371    4874 start.go:159] libmachine.API.Create for "offline-docker-252000" (driver="qemu2")
	I0723 07:32:53.306423    4874 client.go:168] LocalClient.Create starting
	I0723 07:32:53.306538    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:32:53.306620    4874 main.go:141] libmachine: Decoding PEM data...
	I0723 07:32:53.306639    4874 main.go:141] libmachine: Parsing certificate...
	I0723 07:32:53.306696    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:32:53.306749    4874 main.go:141] libmachine: Decoding PEM data...
	I0723 07:32:53.306765    4874 main.go:141] libmachine: Parsing certificate...
	I0723 07:32:53.307413    4874 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:32:53.465984    4874 main.go:141] libmachine: Creating SSH key...
	I0723 07:32:53.584390    4874 main.go:141] libmachine: Creating Disk image...
	I0723 07:32:53.584395    4874 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:32:53.584590    4874 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/disk.qcow2
	I0723 07:32:53.593823    4874 main.go:141] libmachine: STDOUT: 
	I0723 07:32:53.593841    4874 main.go:141] libmachine: STDERR: 
	I0723 07:32:53.593901    4874 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/disk.qcow2 +20000M
	I0723 07:32:53.601634    4874 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:32:53.601646    4874 main.go:141] libmachine: STDERR: 
	I0723 07:32:53.601664    4874 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/disk.qcow2
	I0723 07:32:53.601668    4874 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:32:53.601680    4874 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:32:53.601712    4874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:4b:b4:13:0e:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/offline-docker-252000/disk.qcow2
	I0723 07:32:53.603271    4874 main.go:141] libmachine: STDOUT: 
	I0723 07:32:53.603284    4874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:32:53.603297    4874 client.go:171] duration metric: took 296.874333ms to LocalClient.Create
	I0723 07:32:55.605431    4874 start.go:128] duration metric: took 2.370272084s to createHost
	I0723 07:32:55.605554    4874 start.go:83] releasing machines lock for "offline-docker-252000", held for 2.370788875s
	W0723 07:32:55.605838    4874 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-252000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-252000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:32:55.621209    4874 out.go:177] 
	W0723 07:32:55.626484    4874 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:32:55.626526    4874 out.go:239] * 
	* 
	W0723 07:32:55.628879    4874 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:32:55.640353    4874 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-252000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-23 07:32:55.651209 -0700 PDT m=+2238.296993084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-252000 -n offline-docker-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-252000 -n offline-docker-252000: exit status 7 (49.751625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-252000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-252000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-252000
--- FAIL: TestOffline (10.11s)
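
This failure, and most of the other exit-status-80 failures in this report, share one error: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused. That points at the socket_vmnet daemon not running on the CI host rather than at minikube itself. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew (the service name and restart step are assumptions; adjust to the local install method):

	# Does the daemon's unix socket exist, and is any process serving it?
	# (path taken from the logs above)
	ls -l /var/run/socket_vmnet
	sudo lsof -U 2>/dev/null | grep socket_vmnet
	# With a Homebrew install the daemon runs as a root launchd service:
	sudo brew services restart socket_vmnet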

TestCertOptions (10.15s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions


=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-752000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
E0723 07:44:24.315059    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-752000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.888083542s)

-- stdout --
	* [cert-options-752000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-752000" primary control-plane node in "cert-options-752000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-752000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-752000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-752000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-752000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-752000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.157959ms)

-- stdout --
	* The control-plane node cert-options-752000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-752000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-752000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-752000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-752000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-752000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (38.939917ms)

-- stdout --
	* The control-plane node cert-options-752000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-752000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-752000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port.
-- stdout --
	* The control-plane node cert-options-752000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-752000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-23 07:44:31.417638 -0700 PDT m=+2934.084537417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-752000 -n cert-options-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-752000 -n cert-options-752000: exit status 7 (28.76125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-752000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-752000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-752000
--- FAIL: TestCertOptions (10.15s)
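
Note: every assertion failure in this test is downstream of a single fault. All provisioning attempts die with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', so no VM ever boots and the SAN and kubeconfig checks run against a stopped profile. That error means nothing was listening on the socket_vmnet socket on the Jenkins host. A minimal diagnostic sketch for the host, assuming the /opt/socket_vmnet layout visible in the logs (the daemon binary path and the gateway address are assumptions, not values observed in this run):

	# Is anything serving the socket the qemu2 driver expects?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Foreground start for debugging (assumed daemon path; gateway address is illustrative):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet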

TestCertExpiration (195.47s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-144000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-144000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.067759584s)

-- stdout --
	* [cert-expiration-144000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-144000" primary control-plane node in "cert-expiration-144000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-144000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-144000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-144000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-144000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-144000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.244954291s)

-- stdout --
	* [cert-expiration-144000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-144000" primary control-plane node in "cert-expiration-144000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-144000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-144000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-144000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-144000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-144000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-144000" primary control-plane node in "cert-expiration-144000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-144000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-144000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-144000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-23 07:47:16.628487 -0700 PDT m=+3099.298704751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-144000 -n cert-expiration-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-144000 -n cert-expiration-144000: exit status 7 (66.462583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-144000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-144000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-144000
--- FAIL: TestCertExpiration (195.47s)
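
Note: TestCertExpiration never reaches its real subject. The intended flow is to start with --cert-expiration=3m, wait for the certificates to age out, restart with --cert-expiration=8760h, and assert that the second start warns about expired certs. Both starts failed at VM provisioning with the same socket_vmnet error, and the ~195s wall time is consistent with the 3-minute expiry wait between the two attempts rather than with any cluster work. On a running node, the dates the test depends on can be read off the apiserver certificate directly; a sketch using the cert path from the TestCertOptions section above and standard openssl flags:

	out/minikube-darwin-arm64 ssh -p cert-expiration-144000 -- \
	  "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"
	# prints notBefore=/notAfter=; with --cert-expiration=3m, notAfter falls ~3 minutes after creation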

TestDockerFlags (12.35s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-482000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-482000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.110750375s)

-- stdout --
	* [docker-flags-482000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-482000" primary control-plane node in "docker-flags-482000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-482000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:44:09.058777    5499 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:44:09.058953    5499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:44:09.058957    5499 out.go:304] Setting ErrFile to fd 2...
	I0723 07:44:09.058960    5499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:44:09.059114    5499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:44:09.060507    5499 out.go:298] Setting JSON to false
	I0723 07:44:09.079867    5499 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4413,"bootTime":1721741436,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:44:09.079953    5499 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:44:09.089414    5499 out.go:177] * [docker-flags-482000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:44:09.099506    5499 notify.go:220] Checking for updates...
	I0723 07:44:09.105288    5499 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:44:09.112369    5499 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:44:09.121356    5499 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:44:09.128355    5499 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:44:09.134274    5499 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:44:09.141389    5499 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:44:09.145755    5499 config.go:182] Loaded profile config "cert-expiration-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:44:09.145833    5499 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:44:09.145879    5499 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:44:09.149352    5499 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:44:09.158337    5499 start.go:297] selected driver: qemu2
	I0723 07:44:09.158344    5499 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:44:09.158350    5499 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:44:09.161173    5499 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:44:09.173327    5499 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:44:09.176436    5499 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0723 07:44:09.176465    5499 cni.go:84] Creating CNI manager for ""
	I0723 07:44:09.176472    5499 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:44:09.176475    5499 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 07:44:09.176504    5499 start.go:340] cluster config:
	{Name:docker-flags-482000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-482000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:44:09.180050    5499 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:44:09.187308    5499 out.go:177] * Starting "docker-flags-482000" primary control-plane node in "docker-flags-482000" cluster
	I0723 07:44:09.191212    5499 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:44:09.191223    5499 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:44:09.191229    5499 cache.go:56] Caching tarball of preloaded images
	I0723 07:44:09.191273    5499 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:44:09.191279    5499 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:44:09.191337    5499 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/docker-flags-482000/config.json ...
	I0723 07:44:09.191346    5499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/docker-flags-482000/config.json: {Name:mkb7a1e5a901223f933177415bb57c43ea9c38c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:44:09.191589    5499 start.go:360] acquireMachinesLock for docker-flags-482000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:44:11.344752    5499 start.go:364] duration metric: took 2.153177125s to acquireMachinesLock for "docker-flags-482000"
	I0723 07:44:11.345042    5499 start.go:93] Provisioning new machine with config: &{Name:docker-flags-482000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-482000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:44:11.345271    5499 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:44:11.354665    5499 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0723 07:44:11.403697    5499 start.go:159] libmachine.API.Create for "docker-flags-482000" (driver="qemu2")
	I0723 07:44:11.403763    5499 client.go:168] LocalClient.Create starting
	I0723 07:44:11.403920    5499 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:44:11.403974    5499 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:11.403997    5499 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:11.404078    5499 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:44:11.404123    5499 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:11.404138    5499 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:11.404791    5499 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:44:11.569419    5499 main.go:141] libmachine: Creating SSH key...
	I0723 07:44:11.724048    5499 main.go:141] libmachine: Creating Disk image...
	I0723 07:44:11.724061    5499 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:44:11.724245    5499 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/disk.qcow2
	I0723 07:44:11.733622    5499 main.go:141] libmachine: STDOUT: 
	I0723 07:44:11.733639    5499 main.go:141] libmachine: STDERR: 
	I0723 07:44:11.733683    5499 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/disk.qcow2 +20000M
	I0723 07:44:11.741462    5499 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:44:11.741475    5499 main.go:141] libmachine: STDERR: 
	I0723 07:44:11.741487    5499 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/disk.qcow2
	I0723 07:44:11.741491    5499 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:44:11.741504    5499 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:44:11.741534    5499 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:c3:a0:4e:e3:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/disk.qcow2
	I0723 07:44:11.743207    5499 main.go:141] libmachine: STDOUT: 
	I0723 07:44:11.743222    5499 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:44:11.743242    5499 client.go:171] duration metric: took 339.479834ms to LocalClient.Create
	I0723 07:44:13.745374    5499 start.go:128] duration metric: took 2.400117084s to createHost
	I0723 07:44:13.745449    5499 start.go:83] releasing machines lock for "docker-flags-482000", held for 2.400705041s
	W0723 07:44:13.745532    5499 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:13.756858    5499 out.go:177] * Deleting "docker-flags-482000" in qemu2 ...
	W0723 07:44:13.788978    5499 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:13.789000    5499 start.go:729] Will try again in 5 seconds ...
	I0723 07:44:18.791129    5499 start.go:360] acquireMachinesLock for docker-flags-482000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:44:18.791732    5499 start.go:364] duration metric: took 472.875µs to acquireMachinesLock for "docker-flags-482000"
	I0723 07:44:18.791878    5499 start.go:93] Provisioning new machine with config: &{Name:docker-flags-482000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-482000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:44:18.792167    5499 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:44:18.795895    5499 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0723 07:44:18.846352    5499 start.go:159] libmachine.API.Create for "docker-flags-482000" (driver="qemu2")
	I0723 07:44:18.846414    5499 client.go:168] LocalClient.Create starting
	I0723 07:44:18.846526    5499 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:44:18.846591    5499 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:18.846607    5499 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:18.846674    5499 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:44:18.846718    5499 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:18.846733    5499 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:18.847503    5499 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:44:19.013033    5499 main.go:141] libmachine: Creating SSH key...
	I0723 07:44:19.078114    5499 main.go:141] libmachine: Creating Disk image...
	I0723 07:44:19.078122    5499 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:44:19.078299    5499 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/disk.qcow2
	I0723 07:44:19.087341    5499 main.go:141] libmachine: STDOUT: 
	I0723 07:44:19.087370    5499 main.go:141] libmachine: STDERR: 
	I0723 07:44:19.087413    5499 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/disk.qcow2 +20000M
	I0723 07:44:19.095298    5499 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:44:19.095314    5499 main.go:141] libmachine: STDERR: 
	I0723 07:44:19.095324    5499 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/disk.qcow2
	I0723 07:44:19.095329    5499 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:44:19.095337    5499 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:44:19.095378    5499 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:52:cf:63:be:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/docker-flags-482000/disk.qcow2
	I0723 07:44:19.097020    5499 main.go:141] libmachine: STDOUT: 
	I0723 07:44:19.097034    5499 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:44:19.097045    5499 client.go:171] duration metric: took 250.631ms to LocalClient.Create
	I0723 07:44:21.099186    5499 start.go:128] duration metric: took 2.307036958s to createHost
	I0723 07:44:21.099265    5499 start.go:83] releasing machines lock for "docker-flags-482000", held for 2.307554208s
	W0723 07:44:21.099723    5499 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-482000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-482000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:21.110771    5499 out.go:177] 
	W0723 07:44:21.114456    5499 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:44:21.114495    5499 out.go:239] * 
	* 
	W0723 07:44:21.116973    5499 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:44:21.125444    5499 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-482000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-482000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-482000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.599917ms)

-- stdout --
	* The control-plane node docker-flags-482000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-482000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-482000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-482000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-482000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-482000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-482000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-482000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-482000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.562083ms)

-- stdout --
	* The control-plane node docker-flags-482000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-482000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-482000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-482000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-482000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-482000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-23 07:44:21.266562 -0700 PDT m=+2923.933258001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-482000 -n docker-flags-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-482000 -n docker-flags-482000: exit status 7 (29.049917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-482000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-482000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-482000
--- FAIL: TestDockerFlags (12.35s)
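
Note: the TestDockerFlags assertions reduce to checking that the values passed via --docker-env and --docker-opt surface in dockerd's systemd unit. Because the VM never started, both systemctl probes returned exit status 83 (profile stopped) instead of unit properties. For reference, the two commands from the log above are expected to report the following on a healthy node (expected values taken from the test arguments, not observed in this run):

	out/minikube-darwin-arm64 -p docker-flags-482000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# Environment=... should include FOO=BAR and BAZ=BAT
	out/minikube-darwin-arm64 -p docker-flags-482000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	# ExecStart=... should include --debug (and the other --docker-opt value, icc=true)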

TestForceSystemdFlag (11.41s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-612000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-612000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.224469583s)

-- stdout --
	* [force-systemd-flag-612000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-612000" primary control-plane node in "force-systemd-flag-612000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-612000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:43:35.152306    5350 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:43:35.152442    5350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:43:35.152445    5350 out.go:304] Setting ErrFile to fd 2...
	I0723 07:43:35.152447    5350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:43:35.152573    5350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:43:35.153599    5350 out.go:298] Setting JSON to false
	I0723 07:43:35.169490    5350 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4379,"bootTime":1721741436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:43:35.169563    5350 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:43:35.174575    5350 out.go:177] * [force-systemd-flag-612000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:43:35.181549    5350 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:43:35.181592    5350 notify.go:220] Checking for updates...
	I0723 07:43:35.188528    5350 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:43:35.191548    5350 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:43:35.194556    5350 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:43:35.197509    5350 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:43:35.200554    5350 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:43:35.203820    5350 config.go:182] Loaded profile config "NoKubernetes-361000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:43:35.203892    5350 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:43:35.203946    5350 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:43:35.207510    5350 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:43:35.214517    5350 start.go:297] selected driver: qemu2
	I0723 07:43:35.214523    5350 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:43:35.214529    5350 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:43:35.216711    5350 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:43:35.219483    5350 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:43:35.222669    5350 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 07:43:35.222709    5350 cni.go:84] Creating CNI manager for ""
	I0723 07:43:35.222716    5350 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:43:35.222721    5350 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 07:43:35.222754    5350 start.go:340] cluster config:
	{Name:force-systemd-flag-612000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:43:35.226512    5350 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:43:35.234553    5350 out.go:177] * Starting "force-systemd-flag-612000" primary control-plane node in "force-systemd-flag-612000" cluster
	I0723 07:43:35.238400    5350 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:43:35.238416    5350 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:43:35.238430    5350 cache.go:56] Caching tarball of preloaded images
	I0723 07:43:35.238490    5350 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:43:35.238496    5350 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:43:35.238557    5350 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/force-systemd-flag-612000/config.json ...
	I0723 07:43:35.238570    5350 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/force-systemd-flag-612000/config.json: {Name:mk05beab5673ef3ea0c771d5808ab6cab3116c9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:43:35.239084    5350 start.go:360] acquireMachinesLock for force-systemd-flag-612000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:43:35.937977    5350 start.go:364] duration metric: took 698.853958ms to acquireMachinesLock for "force-systemd-flag-612000"
	I0723 07:43:35.938149    5350 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-612000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:43:35.938377    5350 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:43:35.946968    5350 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0723 07:43:35.998223    5350 start.go:159] libmachine.API.Create for "force-systemd-flag-612000" (driver="qemu2")
	I0723 07:43:35.998276    5350 client.go:168] LocalClient.Create starting
	I0723 07:43:35.998448    5350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:43:35.998513    5350 main.go:141] libmachine: Decoding PEM data...
	I0723 07:43:35.998528    5350 main.go:141] libmachine: Parsing certificate...
	I0723 07:43:35.998603    5350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:43:35.998648    5350 main.go:141] libmachine: Decoding PEM data...
	I0723 07:43:35.998670    5350 main.go:141] libmachine: Parsing certificate...
	I0723 07:43:35.999323    5350 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:43:36.521523    5350 main.go:141] libmachine: Creating SSH key...
	I0723 07:43:36.608991    5350 main.go:141] libmachine: Creating Disk image...
	I0723 07:43:36.608996    5350 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:43:36.609176    5350 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/disk.qcow2
	I0723 07:43:36.618582    5350 main.go:141] libmachine: STDOUT: 
	I0723 07:43:36.618597    5350 main.go:141] libmachine: STDERR: 
	I0723 07:43:36.618646    5350 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/disk.qcow2 +20000M
	I0723 07:43:36.626871    5350 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:43:36.626891    5350 main.go:141] libmachine: STDERR: 
	I0723 07:43:36.626907    5350 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/disk.qcow2
	I0723 07:43:36.626912    5350 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:43:36.626927    5350 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:43:36.626958    5350 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:a7:46:dd:70:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/disk.qcow2
	I0723 07:43:36.628652    5350 main.go:141] libmachine: STDOUT: 
	I0723 07:43:36.628665    5350 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:43:36.628681    5350 client.go:171] duration metric: took 630.397959ms to LocalClient.Create
	I0723 07:43:38.630808    5350 start.go:128] duration metric: took 2.69245275s to createHost
	I0723 07:43:38.630861    5350 start.go:83] releasing machines lock for "force-systemd-flag-612000", held for 2.692888417s
	W0723 07:43:38.630939    5350 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:43:38.644480    5350 out.go:177] * Deleting "force-systemd-flag-612000" in qemu2 ...
	W0723 07:43:38.681740    5350 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:43:38.681765    5350 start.go:729] Will try again in 5 seconds ...
	I0723 07:43:43.683852    5350 start.go:360] acquireMachinesLock for force-systemd-flag-612000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:43:43.689378    5350 start.go:364] duration metric: took 5.307084ms to acquireMachinesLock for "force-systemd-flag-612000"
	I0723 07:43:43.689429    5350 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-612000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:43:43.689618    5350 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:43:43.697282    5350 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0723 07:43:43.744551    5350 start.go:159] libmachine.API.Create for "force-systemd-flag-612000" (driver="qemu2")
	I0723 07:43:43.744600    5350 client.go:168] LocalClient.Create starting
	I0723 07:43:43.744700    5350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:43:43.744782    5350 main.go:141] libmachine: Decoding PEM data...
	I0723 07:43:43.744799    5350 main.go:141] libmachine: Parsing certificate...
	I0723 07:43:43.744860    5350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:43:43.744905    5350 main.go:141] libmachine: Decoding PEM data...
	I0723 07:43:43.744924    5350 main.go:141] libmachine: Parsing certificate...
	I0723 07:43:43.745401    5350 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:43:44.073245    5350 main.go:141] libmachine: Creating SSH key...
	I0723 07:43:44.274707    5350 main.go:141] libmachine: Creating Disk image...
	I0723 07:43:44.274722    5350 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:43:44.274916    5350 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/disk.qcow2
	I0723 07:43:44.284488    5350 main.go:141] libmachine: STDOUT: 
	I0723 07:43:44.284516    5350 main.go:141] libmachine: STDERR: 
	I0723 07:43:44.284569    5350 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/disk.qcow2 +20000M
	I0723 07:43:44.292451    5350 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:43:44.292464    5350 main.go:141] libmachine: STDERR: 
	I0723 07:43:44.292479    5350 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/disk.qcow2
	I0723 07:43:44.292486    5350 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:43:44.292497    5350 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:43:44.292529    5350 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:60:b3:82:42:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-flag-612000/disk.qcow2
	I0723 07:43:44.294229    5350 main.go:141] libmachine: STDOUT: 
	I0723 07:43:44.294251    5350 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:43:44.294267    5350 client.go:171] duration metric: took 549.674208ms to LocalClient.Create
	I0723 07:43:46.296424    5350 start.go:128] duration metric: took 2.60683325s to createHost
	I0723 07:43:46.296506    5350 start.go:83] releasing machines lock for "force-systemd-flag-612000", held for 2.607125333s
	W0723 07:43:46.296820    5350 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-612000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-612000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:43:46.306451    5350 out.go:177] 
	W0723 07:43:46.318481    5350 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:43:46.318506    5350 out.go:239] * 
	* 
	W0723 07:43:46.321181    5350 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:43:46.332456    5350 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-612000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-612000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-612000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (73.389125ms)

-- stdout --
	* The control-plane node force-systemd-flag-612000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-612000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-612000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-23 07:43:46.425455 -0700 PDT m=+2889.091451292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-612000 -n force-systemd-flag-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-612000 -n force-systemd-flag-612000: exit status 7 (33.5895ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-612000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-612000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-612000
--- FAIL: TestForceSystemdFlag (11.41s)
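
Note: every qemu2 start in this group fails before a VM boots, because socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, i.e. the socket_vmnet daemon on the CI host is not serving that socket. A minimal diagnostic sketch, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver setup; these commands are illustrative and were not part of the recorded run:

    # Does the socket path the tests use even exist? (path taken from the log above)
    ls -l /var/run/socket_vmnet
    # With a Homebrew install, socket_vmnet runs as a root launchd service
    HOMEBREW=$(which brew)
    sudo "${HOMEBREW}" services list | grep socket_vmnet
    # Restart the service if it is not running
    sudo "${HOMEBREW}" services start socket_vmnet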

TestForceSystemdEnv (9.98s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-646000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-646000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.770122709s)

-- stdout --
	* [force-systemd-env-646000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-646000" primary control-plane node in "force-systemd-env-646000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-646000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:43:59.078159    5458 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:43:59.078285    5458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:43:59.078294    5458 out.go:304] Setting ErrFile to fd 2...
	I0723 07:43:59.078297    5458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:43:59.078418    5458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:43:59.079428    5458 out.go:298] Setting JSON to false
	I0723 07:43:59.095134    5458 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4403,"bootTime":1721741436,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:43:59.095200    5458 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:43:59.100817    5458 out.go:177] * [force-systemd-env-646000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:43:59.108834    5458 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:43:59.108891    5458 notify.go:220] Checking for updates...
	I0723 07:43:59.116818    5458 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:43:59.119817    5458 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:43:59.122806    5458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:43:59.125800    5458 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:43:59.128822    5458 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0723 07:43:59.132106    5458 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:43:59.132154    5458 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:43:59.136787    5458 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:43:59.143717    5458 start.go:297] selected driver: qemu2
	I0723 07:43:59.143723    5458 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:43:59.143730    5458 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:43:59.146123    5458 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:43:59.148794    5458 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:43:59.151893    5458 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 07:43:59.151909    5458 cni.go:84] Creating CNI manager for ""
	I0723 07:43:59.151922    5458 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:43:59.151926    5458 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 07:43:59.151961    5458 start.go:340] cluster config:
	{Name:force-systemd-env-646000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:43:59.155660    5458 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:43:59.162857    5458 out.go:177] * Starting "force-systemd-env-646000" primary control-plane node in "force-systemd-env-646000" cluster
	I0723 07:43:59.166647    5458 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:43:59.166666    5458 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:43:59.166682    5458 cache.go:56] Caching tarball of preloaded images
	I0723 07:43:59.166753    5458 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:43:59.166759    5458 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:43:59.166819    5458 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/force-systemd-env-646000/config.json ...
	I0723 07:43:59.166833    5458 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/force-systemd-env-646000/config.json: {Name:mk7aacd4e6def54df6b8a8f11dd888b3164904ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:43:59.167173    5458 start.go:360] acquireMachinesLock for force-systemd-env-646000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:43:59.167213    5458 start.go:364] duration metric: took 29.709µs to acquireMachinesLock for "force-systemd-env-646000"
	I0723 07:43:59.167225    5458 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:43:59.167252    5458 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:43:59.174792    5458 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0723 07:43:59.192853    5458 start.go:159] libmachine.API.Create for "force-systemd-env-646000" (driver="qemu2")
	I0723 07:43:59.192887    5458 client.go:168] LocalClient.Create starting
	I0723 07:43:59.192964    5458 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:43:59.192996    5458 main.go:141] libmachine: Decoding PEM data...
	I0723 07:43:59.193005    5458 main.go:141] libmachine: Parsing certificate...
	I0723 07:43:59.193046    5458 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:43:59.193070    5458 main.go:141] libmachine: Decoding PEM data...
	I0723 07:43:59.193078    5458 main.go:141] libmachine: Parsing certificate...
	I0723 07:43:59.193444    5458 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:43:59.348077    5458 main.go:141] libmachine: Creating SSH key...
	I0723 07:43:59.431814    5458 main.go:141] libmachine: Creating Disk image...
	I0723 07:43:59.431819    5458 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:43:59.432013    5458 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/disk.qcow2
	I0723 07:43:59.441124    5458 main.go:141] libmachine: STDOUT: 
	I0723 07:43:59.441150    5458 main.go:141] libmachine: STDERR: 
	I0723 07:43:59.441197    5458 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/disk.qcow2 +20000M
	I0723 07:43:59.449056    5458 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:43:59.449073    5458 main.go:141] libmachine: STDERR: 
	I0723 07:43:59.449083    5458 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/disk.qcow2
	I0723 07:43:59.449088    5458 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:43:59.449098    5458 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:43:59.449132    5458 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:2c:2a:03:19:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/disk.qcow2
	I0723 07:43:59.450792    5458 main.go:141] libmachine: STDOUT: 
	I0723 07:43:59.450876    5458 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:43:59.450898    5458 client.go:171] duration metric: took 258.010917ms to LocalClient.Create
	I0723 07:44:01.452986    5458 start.go:128] duration metric: took 2.28577275s to createHost
	I0723 07:44:01.453001    5458 start.go:83] releasing machines lock for "force-systemd-env-646000", held for 2.285828875s
	W0723 07:44:01.453019    5458 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:01.467968    5458 out.go:177] * Deleting "force-systemd-env-646000" in qemu2 ...
	W0723 07:44:01.478778    5458 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:01.478788    5458 start.go:729] Will try again in 5 seconds ...
	I0723 07:44:06.480856    5458 start.go:360] acquireMachinesLock for force-systemd-env-646000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:44:06.481286    5458 start.go:364] duration metric: took 339.25µs to acquireMachinesLock for "force-systemd-env-646000"
	I0723 07:44:06.481435    5458 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:44:06.481730    5458 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:44:06.490152    5458 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0723 07:44:06.540137    5458 start.go:159] libmachine.API.Create for "force-systemd-env-646000" (driver="qemu2")
	I0723 07:44:06.540193    5458 client.go:168] LocalClient.Create starting
	I0723 07:44:06.540306    5458 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:44:06.540366    5458 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:06.540385    5458 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:06.540451    5458 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:44:06.540505    5458 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:06.540518    5458 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:06.541141    5458 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:44:06.703810    5458 main.go:141] libmachine: Creating SSH key...
	I0723 07:44:06.759222    5458 main.go:141] libmachine: Creating Disk image...
	I0723 07:44:06.759227    5458 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:44:06.759399    5458 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/disk.qcow2
	I0723 07:44:06.768768    5458 main.go:141] libmachine: STDOUT: 
	I0723 07:44:06.768790    5458 main.go:141] libmachine: STDERR: 
	I0723 07:44:06.768861    5458 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/disk.qcow2 +20000M
	I0723 07:44:06.776747    5458 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:44:06.776760    5458 main.go:141] libmachine: STDERR: 
	I0723 07:44:06.776772    5458 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/disk.qcow2
	I0723 07:44:06.776782    5458 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:44:06.776791    5458 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:44:06.776819    5458 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:c1:3f:cd:eb:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/force-systemd-env-646000/disk.qcow2
	I0723 07:44:06.778448    5458 main.go:141] libmachine: STDOUT: 
	I0723 07:44:06.778463    5458 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:44:06.778475    5458 client.go:171] duration metric: took 238.280292ms to LocalClient.Create
	I0723 07:44:08.780606    5458 start.go:128] duration metric: took 2.298882666s to createHost
	I0723 07:44:08.780660    5458 start.go:83] releasing machines lock for "force-systemd-env-646000", held for 2.299375667s
	W0723 07:44:08.781054    5458 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-646000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-646000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:08.795714    5458 out.go:177] 
	W0723 07:44:08.798885    5458 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:44:08.798910    5458 out.go:239] * 
	* 
	W0723 07:44:08.801646    5458 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:44:08.806728    5458 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-646000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-646000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-646000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (74.527125ms)

-- stdout --
	* The control-plane node force-systemd-env-646000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-646000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-646000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-23 07:44:08.899205 -0700 PDT m=+2911.565652292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-646000 -n force-systemd-env-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-646000 -n force-systemd-env-646000: exit status 7 (32.345125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-646000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-646000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-646000
--- FAIL: TestForceSystemdEnv (9.98s)

TestFunctional/parallel/ServiceCmdConnect (32.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-693000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-693000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-4tqb7" [7e6e0d75-942f-4698-93e3-d211fe701486] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-4tqb7" [7e6e0d75-942f-4698-93e3-d211fe701486] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004078917s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:30803
functional_test.go:1657: error fetching http://192.168.105.4:30803: Get "http://192.168.105.4:30803": dial tcp 192.168.105.4:30803: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30803: Get "http://192.168.105.4:30803": dial tcp 192.168.105.4:30803: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30803: Get "http://192.168.105.4:30803": dial tcp 192.168.105.4:30803: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30803: Get "http://192.168.105.4:30803": dial tcp 192.168.105.4:30803: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30803: Get "http://192.168.105.4:30803": dial tcp 192.168.105.4:30803: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30803: Get "http://192.168.105.4:30803": dial tcp 192.168.105.4:30803: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30803: Get "http://192.168.105.4:30803": dial tcp 192.168.105.4:30803: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:30803: Get "http://192.168.105.4:30803": dial tcp 192.168.105.4:30803: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-693000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-4tqb7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-693000/192.168.105.4
Start Time:       Tue, 23 Jul 2024 07:06:34 -0700
Labels:           app=hello-node-connect
pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
echoserver-arm:
Container ID:   docker://79a913ed9420058d7bfdcfa35adca38bf81058133d221082c17a6ab656b10449
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Tue, 23 Jul 2024 07:06:47 -0700
Finished:     Tue, 23 Jul 2024 07:06:47 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2z888 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-2z888:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  31s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-4tqb7 to functional-693000
Normal   Pulled     19s (x3 over 31s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    19s (x3 over 31s)  kubelet            Created container echoserver-arm
Normal   Started    19s (x3 over 31s)  kubelet            Started container echoserver-arm
Warning  BackOff    4s (x3 over 29s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-4tqb7_default(7e6e0d75-942f-4698-93e3-d211fe701486)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-693000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
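
Note: "exec format error" from the container entrypoint means the kernel cannot execute the binary, which points at an architecture mismatch: the nginx binary inside registry.k8s.io/echoserver-arm:1.8 evidently does not match the arm64 node, so the pod crash-loops. A hypothetical way to confirm the mismatch outside the test run (standard docker and minikube commands; the profile name is taken from the log):

    # Pull the image and read the architecture recorded in its config
    docker pull registry.k8s.io/echoserver-arm:1.8
    docker image inspect --format '{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8
    # Compare with the node's CPU architecture
    out/minikube-darwin-arm64 -p functional-693000 ssh -- uname -m
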
functional_test.go:1610: (dbg) Run:  kubectl --context functional-693000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.243.82
IPs:                      10.100.243.82
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30803/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
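
Note: the empty Endpoints field above is the direct cause of the "connection refused" fetches earlier in this section: the only backing pod is stuck in CrashLoopBackOff and never becomes Ready, so kube-proxy has no endpoint behind NodePort 30803. A hypothetical follow-up check (standard kubectl; not part of the recorded run):

    kubectl --context functional-693000 get endpoints hello-node-connect
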
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-693000 -n functional-693000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service | functional-693000 service list                                                                                       | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:06 PDT | 23 Jul 24 07:06 PDT |
	| service | functional-693000 service list                                                                                       | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:06 PDT | 23 Jul 24 07:06 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-693000 service                                                                                            | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:06 PDT | 23 Jul 24 07:06 PDT |
	|         | --namespace=default --https                                                                                          |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                     |                   |         |         |                     |                     |
	| service | functional-693000                                                                                                    | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:06 PDT | 23 Jul 24 07:06 PDT |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-693000 service                                                                                            | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:06 PDT | 23 Jul 24 07:06 PDT |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| addons  | functional-693000 addons list                                                                                        | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:06 PDT | 23 Jul 24 07:06 PDT |
	| addons  | functional-693000 addons list                                                                                        | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:06 PDT | 23 Jul 24 07:06 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-693000 service                                                                                            | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:06 PDT | 23 Jul 24 07:06 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-693000 ssh findmnt                                                                                        | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-693000                                                                                                 | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1340042113/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-693000 ssh findmnt                                                                                        | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT | 23 Jul 24 07:07 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-693000 ssh -- ls                                                                                          | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT | 23 Jul 24 07:07 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-693000 ssh cat                                                                                            | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT | 23 Jul 24 07:07 PDT |
	|         | /mount-9p/test-1721743620821254000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-693000 ssh stat                                                                                           | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT | 23 Jul 24 07:07 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-693000 ssh stat                                                                                           | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT | 23 Jul 24 07:07 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-693000 ssh sudo                                                                                           | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT | 23 Jul 24 07:07 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-693000 ssh findmnt                                                                                        | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-693000                                                                                                 | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3659186591/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-693000 ssh findmnt                                                                                        | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT | 23 Jul 24 07:07 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-693000 ssh -- ls                                                                                          | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT | 23 Jul 24 07:07 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-693000 ssh sudo                                                                                           | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-693000                                                                                                 | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2886696848/001:/mount1   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-693000                                                                                                 | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2886696848/001:/mount3   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-693000 ssh findmnt                                                                                        | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount   | -p functional-693000                                                                                                 | functional-693000 | jenkins | v1.33.1 | 23 Jul 24 07:07 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2886696848/001:/mount2   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 07:05:09
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
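	For anyone post-processing this log, the four header lines above pin down a glog-style record: a severity letter, the mmdd date, a microsecond timestamp, a thread id, the emitting file:line, then the message. A minimal Go sketch (ours, not part of minikube) that splits a line on that format:

	// parseglog.go: split one glog-style line per the "Log line format" header.
	package main

	import (
		"fmt"
		"regexp"
	)

	// Groups: severity, mmdd, hh:mm:ss.uuuuuu, thread id, file:line, message.
	var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

	func main() {
		sample := "I0723 07:05:09.170511    2915 out.go:291] Setting OutFile to fd 1 ..."
		if m := glogLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("sev=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}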
	I0723 07:05:09.170511    2915 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:05:09.170642    2915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:05:09.170644    2915 out.go:304] Setting ErrFile to fd 2...
	I0723 07:05:09.170646    2915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:05:09.170776    2915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:05:09.171862    2915 out.go:298] Setting JSON to false
	I0723 07:05:09.188247    2915 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2073,"bootTime":1721741436,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:05:09.188317    2915 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:05:09.193029    2915 out.go:177] * [functional-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:05:09.201981    2915 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:05:09.202006    2915 notify.go:220] Checking for updates...
	I0723 07:05:09.208897    2915 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:05:09.211958    2915 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:05:09.214938    2915 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:05:09.217918    2915 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:05:09.220916    2915 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:05:09.224211    2915 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:05:09.224255    2915 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:05:09.228921    2915 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:05:09.235948    2915 start.go:297] selected driver: qemu2
	I0723 07:05:09.235953    2915 start.go:901] validating driver "qemu2" against &{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:05:09.236004    2915 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:05:09.238099    2915 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:05:09.238135    2915 cni.go:84] Creating CNI manager for ""
	I0723 07:05:09.238141    2915 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:05:09.238179    2915 start.go:340] cluster config:
	{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:05:09.241411    2915 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:05:09.248910    2915 out.go:177] * Starting "functional-693000" primary control-plane node in "functional-693000" cluster
	I0723 07:05:09.251883    2915 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:05:09.251894    2915 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:05:09.251903    2915 cache.go:56] Caching tarball of preloaded images
	I0723 07:05:09.251950    2915 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:05:09.251954    2915 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:05:09.251994    2915 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/config.json ...
	I0723 07:05:09.252365    2915 start.go:360] acquireMachinesLock for functional-693000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:05:09.252395    2915 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "functional-693000"
	I0723 07:05:09.252402    2915 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:05:09.252406    2915 fix.go:54] fixHost starting: 
	I0723 07:05:09.253007    2915 fix.go:112] recreateIfNeeded on functional-693000: state=Running err=<nil>
	W0723 07:05:09.253013    2915 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:05:09.256919    2915 out.go:177] * Updating the running qemu2 "functional-693000" VM ...
	I0723 07:05:09.263911    2915 machine.go:94] provisionDockerMachine start ...
	I0723 07:05:09.263937    2915 main.go:141] libmachine: Using SSH client type: native
	I0723 07:05:09.264047    2915 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc6a10] 0x102cc9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0723 07:05:09.264049    2915 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 07:05:09.316820    2915 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-693000
	
	I0723 07:05:09.316828    2915 buildroot.go:166] provisioning hostname "functional-693000"
	I0723 07:05:09.316861    2915 main.go:141] libmachine: Using SSH client type: native
	I0723 07:05:09.316969    2915 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc6a10] 0x102cc9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0723 07:05:09.316972    2915 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-693000 && echo "functional-693000" | sudo tee /etc/hostname
	I0723 07:05:09.374269    2915 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-693000
	
	I0723 07:05:09.374314    2915 main.go:141] libmachine: Using SSH client type: native
	I0723 07:05:09.374435    2915 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc6a10] 0x102cc9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0723 07:05:09.374441    2915 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-693000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-693000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-693000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 07:05:09.425961    2915 main.go:141] libmachine: SSH cmd err, output: <nil>: 
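	The script above is the standard idempotent /etc/hosts pin: if no entry already maps the hostname, it rewrites an existing 127.0.1.1 line, otherwise appends one. The same logic as a standalone Go sketch (the function name is ours; minikube runs the shell version over SSH):

	// ensurehosts.go: idempotently pin "127.0.1.1 <name>" in a hosts file.
	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func ensureHostname(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		content := string(data)
		// Already present? Done (mirrors the outer grep check).
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(content) {
			return nil
		}
		line := "127.0.1.1 " + name
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(content) {
			content = re.ReplaceAllString(content, line) // rewrite existing entry
		} else {
			content = strings.TrimRight(content, "\n") + "\n" + line + "\n" // append
		}
		return os.WriteFile(path, []byte(content), 0o644)
	}

	func main() {
		if err := ensureHostname("/etc/hosts", "functional-693000"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}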
	I0723 07:05:09.425970    2915 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19319-1567/.minikube CaCertPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19319-1567/.minikube}
	I0723 07:05:09.425981    2915 buildroot.go:174] setting up certificates
	I0723 07:05:09.425984    2915 provision.go:84] configureAuth start
	I0723 07:05:09.425992    2915 provision.go:143] copyHostCerts
	I0723 07:05:09.426060    2915 exec_runner.go:144] found /Users/jenkins/minikube-integration/19319-1567/.minikube/cert.pem, removing ...
	I0723 07:05:09.426064    2915 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19319-1567/.minikube/cert.pem
	I0723 07:05:09.426297    2915 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19319-1567/.minikube/cert.pem (1123 bytes)
	I0723 07:05:09.426480    2915 exec_runner.go:144] found /Users/jenkins/minikube-integration/19319-1567/.minikube/key.pem, removing ...
	I0723 07:05:09.426482    2915 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19319-1567/.minikube/key.pem
	I0723 07:05:09.426538    2915 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19319-1567/.minikube/key.pem (1679 bytes)
	I0723 07:05:09.426672    2915 exec_runner.go:144] found /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.pem, removing ...
	I0723 07:05:09.426674    2915 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.pem
	I0723 07:05:09.426722    2915 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.pem (1078 bytes)
	I0723 07:05:09.426817    2915 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca-key.pem org=jenkins.functional-693000 san=[127.0.0.1 192.168.105.4 functional-693000 localhost minikube]
	I0723 07:05:09.604953    2915 provision.go:177] copyRemoteCerts
	I0723 07:05:09.605038    2915 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 07:05:09.605046    2915 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/functional-693000/id_rsa Username:docker}
	I0723 07:05:09.634487    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0723 07:05:09.643587    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 07:05:09.652184    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 07:05:09.660262    2915 provision.go:87] duration metric: took 234.2775ms to configureAuth
	I0723 07:05:09.660268    2915 buildroot.go:189] setting minikube options for container-runtime
	I0723 07:05:09.660383    2915 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:05:09.660422    2915 main.go:141] libmachine: Using SSH client type: native
	I0723 07:05:09.660506    2915 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc6a10] 0x102cc9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0723 07:05:09.660509    2915 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0723 07:05:09.713320    2915 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0723 07:05:09.713324    2915 buildroot.go:70] root file system type: tmpfs
	I0723 07:05:09.713371    2915 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0723 07:05:09.713416    2915 main.go:141] libmachine: Using SSH client type: native
	I0723 07:05:09.713510    2915 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc6a10] 0x102cc9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0723 07:05:09.713541    2915 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0723 07:05:09.769201    2915 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0723 07:05:09.769265    2915 main.go:141] libmachine: Using SSH client type: native
	I0723 07:05:09.769381    2915 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc6a10] 0x102cc9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0723 07:05:09.769387    2915 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0723 07:05:09.823440    2915 main.go:141] libmachine: SSH cmd err, output: <nil>: 
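	The empty diff output here indicates the freshly rendered unit matched /lib/systemd/system/docker.service byte for byte, so the `|| { mv ...; daemon-reload; enable; restart; }` branch was skipped. A Go sketch of that render, compare, and swap-only-on-change pattern (paths and command set as above; this is not minikube's code):

	// updateunit.go: swap in a rendered systemd unit only when it changed.
	package updateunit

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func UpdateUnit(path string, rendered []byte) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // unchanged: no daemon-reload, no service restart
		}
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%s: %v: %s", args[0], err, out)
			}
		}
		return nil
	}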
	I0723 07:05:09.823448    2915 machine.go:97] duration metric: took 559.543834ms to provisionDockerMachine
	I0723 07:05:09.823453    2915 start.go:293] postStartSetup for "functional-693000" (driver="qemu2")
	I0723 07:05:09.823459    2915 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 07:05:09.823506    2915 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 07:05:09.823513    2915 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/functional-693000/id_rsa Username:docker}
	I0723 07:05:09.853935    2915 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 07:05:09.855505    2915 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 07:05:09.855509    2915 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19319-1567/.minikube/addons for local assets ...
	I0723 07:05:09.855583    2915 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19319-1567/.minikube/files for local assets ...
	I0723 07:05:09.855692    2915 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/ssl/certs/20652.pem -> 20652.pem in /etc/ssl/certs
	I0723 07:05:09.855806    2915 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/test/nested/copy/2065/hosts -> hosts in /etc/test/nested/copy/2065
	I0723 07:05:09.855837    2915 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2065
	I0723 07:05:09.859551    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/ssl/certs/20652.pem --> /etc/ssl/certs/20652.pem (1708 bytes)
	I0723 07:05:09.867662    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/test/nested/copy/2065/hosts --> /etc/test/nested/copy/2065/hosts (40 bytes)
	I0723 07:05:09.876109    2915 start.go:296] duration metric: took 52.651625ms for postStartSetup
	I0723 07:05:09.876120    2915 fix.go:56] duration metric: took 623.724625ms for fixHost
	I0723 07:05:09.876159    2915 main.go:141] libmachine: Using SSH client type: native
	I0723 07:05:09.876285    2915 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc6a10] 0x102cc9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0723 07:05:09.876288    2915 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0723 07:05:09.929118    2915 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721743509.971855663
	
	I0723 07:05:09.929123    2915 fix.go:216] guest clock: 1721743509.971855663
	I0723 07:05:09.929127    2915 fix.go:229] Guest: 2024-07-23 07:05:09.971855663 -0700 PDT Remote: 2024-07-23 07:05:09.876121 -0700 PDT m=+0.725472293 (delta=95.734663ms)
	I0723 07:05:09.929143    2915 fix.go:200] guest clock delta is within tolerance: 95.734663ms
	I0723 07:05:09.929145    2915 start.go:83] releasing machines lock for "functional-693000", held for 676.758792ms
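	The fix.go lines above read the guest's clock with `date +%s.%N` and accept the ~96ms delta against the host. The arithmetic, as a Go sketch (the one-second tolerance is illustrative; minikube's actual threshold may differ):

	// clockdelta.go: parse `date +%s.%N` output and compare with the host clock.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseEpoch(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		if len(parts) != 2 {
			return time.Time{}, fmt.Errorf("unexpected date output: %q", out)
		}
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec, err := strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseEpoch("1721743509.971855663") // value from the log above
		if err != nil {
			fmt.Println(err)
			return
		}
		delta := guest.Sub(time.Now())
		const tolerance = time.Second
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta.Abs() < tolerance)
	}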
	I0723 07:05:09.929461    2915 ssh_runner.go:195] Run: cat /version.json
	I0723 07:05:09.929466    2915 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 07:05:09.929466    2915 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/functional-693000/id_rsa Username:docker}
	I0723 07:05:09.929485    2915 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/functional-693000/id_rsa Username:docker}
	I0723 07:05:09.962450    2915 ssh_runner.go:195] Run: systemctl --version
	I0723 07:05:09.965178    2915 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 07:05:10.008033    2915 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 07:05:10.008068    2915 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 07:05:10.011506    2915 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0723 07:05:10.011511    2915 start.go:495] detecting cgroup driver to use...
	I0723 07:05:10.011604    2915 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 07:05:10.018265    2915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0723 07:05:10.022099    2915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0723 07:05:10.025975    2915 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0723 07:05:10.026001    2915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0723 07:05:10.030008    2915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0723 07:05:10.033794    2915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0723 07:05:10.037475    2915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0723 07:05:10.041025    2915 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 07:05:10.044648    2915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0723 07:05:10.048214    2915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0723 07:05:10.052051    2915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0723 07:05:10.055948    2915 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 07:05:10.059776    2915 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 07:05:10.063839    2915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:05:10.182508    2915 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0723 07:05:10.194073    2915 start.go:495] detecting cgroup driver to use...
	I0723 07:05:10.194137    2915 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0723 07:05:10.200087    2915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 07:05:10.208330    2915 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 07:05:10.214514    2915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 07:05:10.220452    2915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0723 07:05:10.225683    2915 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 07:05:10.232260    2915 ssh_runner.go:195] Run: which cri-dockerd
	I0723 07:05:10.233821    2915 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0723 07:05:10.237049    2915 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0723 07:05:10.242939    2915 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0723 07:05:10.345747    2915 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0723 07:05:10.456278    2915 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0723 07:05:10.456332    2915 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0723 07:05:10.462688    2915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:05:10.571411    2915 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0723 07:05:22.860334    2915 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.289118917s)
	I0723 07:05:22.860407    2915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0723 07:05:22.866282    2915 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0723 07:05:22.877365    2915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0723 07:05:22.883359    2915 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0723 07:05:22.972239    2915 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0723 07:05:23.061903    2915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:05:23.151957    2915 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0723 07:05:23.158741    2915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0723 07:05:23.164575    2915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:05:23.271644    2915 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0723 07:05:23.299846    2915 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0723 07:05:23.299914    2915 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0723 07:05:23.302270    2915 start.go:563] Will wait 60s for crictl version
	I0723 07:05:23.302307    2915 ssh_runner.go:195] Run: which crictl
	I0723 07:05:23.303806    2915 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 07:05:23.320237    2915 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0723 07:05:23.320332    2915 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0723 07:05:23.327184    2915 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0723 07:05:23.343386    2915 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0723 07:05:23.343452    2915 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0723 07:05:23.348384    2915 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0723 07:05:23.352293    2915 kubeadm.go:883] updating cluster {Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 07:05:23.352363    2915 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:05:23.352429    2915 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0723 07:05:23.358667    2915 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-693000
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0723 07:05:23.358671    2915 docker.go:615] Images already preloaded, skipping extraction
	I0723 07:05:23.358718    2915 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0723 07:05:23.364796    2915 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-693000
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0723 07:05:23.364806    2915 cache_images.go:84] Images are preloaded, skipping loading
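	The two identical image listings above are the preload check: `docker images --format {{.Repository}}:{{.Tag}}` is compared against what the preload tarball should have provided, and since everything is present, extraction is skipped. A reduced sketch of that comparison (only two of the listed images spelled out):

	// preloadcheck.go: verify expected preloaded images exist in `docker images`.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		want := []string{ // subset of the list above
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/etcd:3.5.12-0",
		}
		for _, img := range want {
			if !have[img] {
				fmt.Println("missing; tarball extraction needed:", img)
			}
		}
	}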
	I0723 07:05:23.364810    2915 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.30.3 docker true true} ...
	I0723 07:05:23.364861    2915 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-693000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 07:05:23.364915    2915 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0723 07:05:23.387953    2915 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0723 07:05:23.388004    2915 cni.go:84] Creating CNI manager for ""
	I0723 07:05:23.388011    2915 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:05:23.388018    2915 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 07:05:23.388027    2915 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-693000 NodeName:functional-693000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 07:05:23.388100    2915 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-693000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
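	The four kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the option struct logged at kubeadm.go:181, with the node IP, port, name and version substituted. A reduced, hypothetical Go template illustrating that substitution (this cut-down template is ours, not minikube's):

	// kubeadmtmpl.go: a cut-down illustration of templating the kubeadm config.
	package main

	import (
		"os"
		"text/template"
	)

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`

	func main() {
		t := template.Must(template.New("init").Parse(initCfg))
		_ = t.Execute(os.Stdout, map[string]any{
			"AdvertiseAddress": "192.168.105.4",
			"APIServerPort":    8441,
			"NodeName":         "functional-693000",
			"NodeIP":           "192.168.105.4",
		})
	}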
	
	I0723 07:05:23.388155    2915 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 07:05:23.392627    2915 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 07:05:23.392654    2915 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 07:05:23.395981    2915 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0723 07:05:23.401996    2915 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 07:05:23.407768    2915 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
	I0723 07:05:23.414059    2915 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0723 07:05:23.415544    2915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:05:23.503099    2915 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 07:05:23.509246    2915 certs.go:68] Setting up /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000 for IP: 192.168.105.4
	I0723 07:05:23.509249    2915 certs.go:194] generating shared ca certs ...
	I0723 07:05:23.509257    2915 certs.go:226] acquiring lock for ca certs: {Name:mk3c99e95d37931a4e7b239d14c48fdfa53d0dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:05:23.509426    2915 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.key
	I0723 07:05:23.509480    2915 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/proxy-client-ca.key
	I0723 07:05:23.509484    2915 certs.go:256] generating profile certs ...
	I0723 07:05:23.509552    2915 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.key
	I0723 07:05:23.509610    2915 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/apiserver.key.60d2d575
	I0723 07:05:23.509659    2915 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/proxy-client.key
	I0723 07:05:23.509824    2915 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/2065.pem (1338 bytes)
	W0723 07:05:23.509852    2915 certs.go:480] ignoring /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/2065_empty.pem, impossibly tiny 0 bytes
	I0723 07:05:23.509856    2915 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 07:05:23.509875    2915 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem (1078 bytes)
	I0723 07:05:23.509903    2915 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem (1123 bytes)
	I0723 07:05:23.509922    2915 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/key.pem (1679 bytes)
	I0723 07:05:23.509962    2915 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/ssl/certs/20652.pem (1708 bytes)
	I0723 07:05:23.510351    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 07:05:23.519082    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0723 07:05:23.527280    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 07:05:23.535726    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0723 07:05:23.543735    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0723 07:05:23.551856    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 07:05:23.560114    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 07:05:23.568101    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 07:05:23.575918    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 07:05:23.584104    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/2065.pem --> /usr/share/ca-certificates/2065.pem (1338 bytes)
	I0723 07:05:23.592340    2915 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/ssl/certs/20652.pem --> /usr/share/ca-certificates/20652.pem (1708 bytes)
	I0723 07:05:23.600108    2915 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 07:05:23.605901    2915 ssh_runner.go:195] Run: openssl version
	I0723 07:05:23.607722    2915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2065.pem && ln -fs /usr/share/ca-certificates/2065.pem /etc/ssl/certs/2065.pem"
	I0723 07:05:23.611622    2915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2065.pem
	I0723 07:05:23.613202    2915 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:03 /usr/share/ca-certificates/2065.pem
	I0723 07:05:23.613220    2915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2065.pem
	I0723 07:05:23.615395    2915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2065.pem /etc/ssl/certs/51391683.0"
	I0723 07:05:23.619009    2915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20652.pem && ln -fs /usr/share/ca-certificates/20652.pem /etc/ssl/certs/20652.pem"
	I0723 07:05:23.622936    2915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20652.pem
	I0723 07:05:23.624528    2915 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:03 /usr/share/ca-certificates/20652.pem
	I0723 07:05:23.624541    2915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20652.pem
	I0723 07:05:23.626534    2915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20652.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 07:05:23.630155    2915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 07:05:23.633854    2915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 07:05:23.635424    2915 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I0723 07:05:23.635438    2915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 07:05:23.637525    2915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
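	The `openssl x509 -hash -noout` / `ln -fs` pairs above build OpenSSL's hashed-directory layout: each CA certificate in /etc/ssl/certs gets a symlink named <subject-hash>.0 (b5213941.0 for minikubeCA.pem, per the command just above) so the TLS stack can locate it by subject. Roughly, in Go (a sketch; error handling trimmed):

	// rehash.go: create the <subject-hash>.0 symlink for one PEM certificate.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func linkByHash(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // mirror `ln -fs`: replace any existing link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}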
	I0723 07:05:23.640896    2915 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 07:05:23.642515    2915 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 07:05:23.644649    2915 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 07:05:23.646688    2915 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 07:05:23.649103    2915 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 07:05:23.651350    2915 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 07:05:23.653462    2915 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0723 07:05:23.655405    2915 kubeadm.go:392] StartCluster: {Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:05:23.655479    2915 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0723 07:05:23.661131    2915 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 07:05:23.664717    2915 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 07:05:23.664719    2915 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 07:05:23.664739    2915 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 07:05:23.667933    2915 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 07:05:23.668222    2915 kubeconfig.go:125] found "functional-693000" server: "https://192.168.105.4:8441"
	I0723 07:05:23.669464    2915 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 07:05:23.672954    2915 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
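Drift detection is nothing more than `diff -u` between the kubeadm.yaml already on disk and the freshly rendered kubeadm.yaml.new: exit status 0 means no change, 1 means drift (here caused by the test's enable-admission-plugins override), and the unified diff is logged verbatim. A minimal sketch of interpreting diff's exit code from Go:

```go
// configDrift runs `diff -u current proposed` and reports whether the
// rendered kubeadm config differs from what is on disk. diff exits 0
// when files match, 1 when they differ, 2+ on error.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func configDrift(current, proposed string) (drift bool, diff string, err error) {
	out, err := exec.Command("diff", "-u", current, proposed).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: no drift
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit 2+: diff itself failed
}

func main() {
	drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drift {
		fmt.Println("will reconfigure cluster from new config:\n" + diff)
	}
}
```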
	I0723 07:05:23.672957    2915 kubeadm.go:1160] stopping kube-system containers ...
	I0723 07:05:23.673004    2915 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0723 07:05:23.680327    2915 docker.go:483] Stopping containers: [d06ca4c31b6a bfa135ab8564 fe78313837dd 9b711e5a9798 dd144ad74b0e a7b5f789cf94 23075b787ec0 5e36a2b92c03 27d38e5dd5f4 94cb3a81d369 69c5edfdfe35 a0f48147282f 909da1e2fb1d 26dea5b62ddb d3bdb8bd4646 b638c6955f24 021b188a66a8 9370be275d48 73c0e03954dd 9ad3b123ca07 54bd0920404a d962ae0d75df b248b9e6f021 b4f35df9720e 9410f2b47d5d c3da7802bb5a 9b028b10e5ac 743d8ffd18d7 e9071c336dff]
	I0723 07:05:23.680378    2915 ssh_runner.go:195] Run: docker stop d06ca4c31b6a bfa135ab8564 fe78313837dd 9b711e5a9798 dd144ad74b0e a7b5f789cf94 23075b787ec0 5e36a2b92c03 27d38e5dd5f4 94cb3a81d369 69c5edfdfe35 a0f48147282f 909da1e2fb1d 26dea5b62ddb d3bdb8bd4646 b638c6955f24 021b188a66a8 9370be275d48 73c0e03954dd 9ad3b123ca07 54bd0920404a d962ae0d75df b248b9e6f021 b4f35df9720e 9410f2b47d5d c3da7802bb5a 9b028b10e5ac 743d8ffd18d7 e9071c336dff
	I0723 07:05:23.687594    2915 ssh_runner.go:195] Run: sudo systemctl stop kubelet
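Before reconfiguring, every kube-system container is stopped (matched via the kubelet's `k8s_<container>_<pod>_<namespace>_...` container-naming convention) and kubelet itself is halted so nothing restarts the old pods mid-flight. A sketch mirroring the two logged docker commands, assuming the docker CLI is on PATH inside the guest:

```go
// stopKubeSystemContainers mirrors the logged commands: list container
// IDs whose names match the kubelet's k8s_*_(kube-system)_ pattern,
// then stop them all in a single `docker stop`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	fmt.Printf("Stopping containers: %v\n", ids)
	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```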
	I0723 07:05:23.784012    2915 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 07:05:23.789386    2915 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jul 23 14:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Jul 23 14:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul 23 14:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul 23 14:04 /etc/kubernetes/scheduler.conf
	
	I0723 07:05:23.789416    2915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0723 07:05:23.793942    2915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0723 07:05:23.798566    2915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0723 07:05:23.803111    2915 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0723 07:05:23.803140    2915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 07:05:23.807319    2915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0723 07:05:23.811099    2915 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0723 07:05:23.811120    2915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 07:05:23.814847    2915 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 07:05:23.818456    2915 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:05:23.838366    2915 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:05:24.365812    2915 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:05:24.494287    2915 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:05:24.523465    2915 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
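Rather than a full `kubeadm init`, the restart replays just the individual init phases — certs, kubeconfig, kubelet-start, control-plane, etcd — each against the refreshed kubeadm.yaml and with PATH prefixed so the version-matched v1.30.3 binaries are used. A sketch of that sequence (a hypothetical helper, not minikube's actual runner):

```go
// runInitPhases replays the `kubeadm init phase` steps seen in the log,
// with PATH prefixed so the versioned binaries win over anything else.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func runInitPhases(binDir, config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", config)
		cmd := exec.Command("kubeadm", args...)
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("/var/lib/minikube/binaries/v1.30.3", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```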
	I0723 07:05:24.555180    2915 api_server.go:52] waiting for apiserver process to appear ...
	I0723 07:05:24.555241    2915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:05:25.057584    2915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:05:25.556220    2915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:05:25.561358    2915 api_server.go:72] duration metric: took 1.006195083s to wait for apiserver process to appear ...
	I0723 07:05:25.561368    2915 api_server.go:88] waiting for apiserver healthz status ...
	I0723 07:05:25.561377    2915 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0723 07:05:27.957660    2915 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 07:05:27.957668    2915 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 07:05:27.957673    2915 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0723 07:05:27.975716    2915 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 07:05:27.975726    2915 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 07:05:28.062335    2915 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0723 07:05:28.064948    2915 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 07:05:28.064954    2915 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 07:05:28.563357    2915 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0723 07:05:28.566002    2915 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 07:05:28.566010    2915 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 07:05:29.063367    2915 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0723 07:05:29.066430    2915 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 07:05:29.066440    2915 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 07:05:29.563253    2915 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0723 07:05:29.566464    2915 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0723 07:05:29.570631    2915 api_server.go:141] control plane version: v1.30.3
	I0723 07:05:29.570639    2915 api_server.go:131] duration metric: took 4.009335083s to wait for apiserver health ...
	I0723 07:05:29.570646    2915 cni.go:84] Creating CNI manager for ""
	I0723 07:05:29.570675    2915 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:05:29.575041    2915 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 07:05:29.578955    2915 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 07:05:29.582931    2915 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
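With the apiserver healthy, a bridge CNI config is written to /etc/cni/net.d/1-k8s.conflist, since the qemu2 driver with the docker runtime ships no other CNI. The log records only the file's size (496 bytes), not its contents; the sketch below writes a representative bridge+portmap conflist, with the pod subnet an assumption:

```go
// writeBridgeConflist writes a bridge CNI config like the one scp'd
// above. The exact contents aren't shown in the log; this is a
// representative bridge+portmap conflist.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```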
	I0723 07:05:29.588665    2915 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 07:05:29.593181    2915 system_pods.go:59] 7 kube-system pods found
	I0723 07:05:29.593190    2915 system_pods.go:61] "coredns-7db6d8ff4d-tjt9w" [aeed040b-e3a1-4ac7-bab7-d1f44d04a203] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 07:05:29.593193    2915 system_pods.go:61] "etcd-functional-693000" [d8dea9a0-48ed-4373-a1cf-431b06ee1834] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 07:05:29.593195    2915 system_pods.go:61] "kube-apiserver-functional-693000" [2c34b417-0aac-4dfb-aa00-623f06b92185] Pending
	I0723 07:05:29.593198    2915 system_pods.go:61] "kube-controller-manager-functional-693000" [89c6d109-8363-427a-8d84-310c3805e4f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 07:05:29.593201    2915 system_pods.go:61] "kube-proxy-mxb8f" [ce195fa3-8107-4247-938a-472f38f13710] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 07:05:29.593203    2915 system_pods.go:61] "kube-scheduler-functional-693000" [c8e0a1d7-a2e0-4120-bd99-dd7df1f78de8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 07:05:29.593205    2915 system_pods.go:61] "storage-provisioner" [9f9ed311-30be-40e7-a66a-2171b56d51f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 07:05:29.593207    2915 system_pods.go:74] duration metric: took 4.538958ms to wait for pod list to return data ...
	I0723 07:05:29.593210    2915 node_conditions.go:102] verifying NodePressure condition ...
	I0723 07:05:29.594387    2915 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 07:05:29.594391    2915 node_conditions.go:123] node cpu capacity is 2
	I0723 07:05:29.594395    2915 node_conditions.go:105] duration metric: took 1.183875ms to run NodePressure ...
	I0723 07:05:29.594401    2915 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:05:29.818472    2915 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 07:05:29.828935    2915 kubeadm.go:739] kubelet initialised
	I0723 07:05:29.828940    2915 kubeadm.go:740] duration metric: took 10.459833ms waiting for restarted kubelet to initialise ...
	I0723 07:05:29.828943    2915 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 07:05:29.836167    2915 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace to be "Ready" ...
	I0723 07:05:31.841457    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:05:34.341048    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:05:36.841249    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:05:38.841343    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:05:41.341006    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:05:43.840780    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:05:45.841204    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:05:48.340641    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:05:50.841362    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:05:53.341066    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:05:55.341094    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:05:57.840493    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:05:59.840532    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:06:02.340964    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:06:04.840442    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:06:06.840716    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:06:09.339971    2915 pod_ready.go:102] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"False"
	I0723 07:06:10.340517    2915 pod_ready.go:92] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"True"
	I0723 07:06:10.340525    2915 pod_ready.go:81] duration metric: took 40.505038s for pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:10.340530    2915 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:10.342772    2915 pod_ready.go:92] pod "etcd-functional-693000" in "kube-system" namespace has status "Ready":"True"
	I0723 07:06:10.342775    2915 pod_ready.go:81] duration metric: took 2.242917ms for pod "etcd-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:10.342779    2915 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:10.345708    2915 pod_ready.go:92] pod "kube-apiserver-functional-693000" in "kube-system" namespace has status "Ready":"True"
	I0723 07:06:10.345711    2915 pod_ready.go:81] duration metric: took 2.930458ms for pod "kube-apiserver-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:10.345714    2915 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:10.347717    2915 pod_ready.go:92] pod "kube-controller-manager-functional-693000" in "kube-system" namespace has status "Ready":"True"
	I0723 07:06:10.347720    2915 pod_ready.go:81] duration metric: took 2.003458ms for pod "kube-controller-manager-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:10.347726    2915 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mxb8f" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:10.349609    2915 pod_ready.go:92] pod "kube-proxy-mxb8f" in "kube-system" namespace has status "Ready":"True"
	I0723 07:06:10.349612    2915 pod_ready.go:81] duration metric: took 1.883625ms for pod "kube-proxy-mxb8f" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:10.349615    2915 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:10.741407    2915 pod_ready.go:92] pod "kube-scheduler-functional-693000" in "kube-system" namespace has status "Ready":"True"
	I0723 07:06:10.741414    2915 pod_ready.go:81] duration metric: took 391.803458ms for pod "kube-scheduler-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:10.741419    2915 pod_ready.go:38] duration metric: took 40.913166083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
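Each pod_ready wait above polls the pod object and looks for a `Ready` condition with status `True`; coredns needed roughly 40s here because its readiness probe depends on reaching the apiserver's service IP (the matching i/o timeouts are visible in the coredns log later in this report). The check itself, sketched with client-go and assuming kubeconfig access to the cluster:

```go
// isPodReady performs the check behind the pod_ready lines: fetch the
// pod and look for a Ready condition with status True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(client kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := isPodReady(client, "kube-system", "coredns-7db6d8ff4d-tjt9w")
	fmt.Println(ready, err)
}
```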
	I0723 07:06:10.741430    2915 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 07:06:10.745997    2915 ops.go:34] apiserver oom_adj: -16
	I0723 07:06:10.746001    2915 kubeadm.go:597] duration metric: took 47.08207925s to restartPrimaryControlPlane
	I0723 07:06:10.746004    2915 kubeadm.go:394] duration metric: took 47.091400875s to StartCluster
	I0723 07:06:10.746012    2915 settings.go:142] acquiring lock: {Name:mkd8f4c38e79948dfc5500ad891e72aa4257d24c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:06:10.746114    2915 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:06:10.746451    2915 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/kubeconfig: {Name:mkd61b3eb94b798a54b8f29057406aee7268d37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:06:10.746706    2915 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:06:10.746717    2915 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 07:06:10.746751    2915 addons.go:69] Setting storage-provisioner=true in profile "functional-693000"
	I0723 07:06:10.746762    2915 addons.go:234] Setting addon storage-provisioner=true in "functional-693000"
	W0723 07:06:10.746764    2915 addons.go:243] addon storage-provisioner should already be in state true
	I0723 07:06:10.746775    2915 host.go:66] Checking if "functional-693000" exists ...
	I0723 07:06:10.746787    2915 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:06:10.746802    2915 addons.go:69] Setting default-storageclass=true in profile "functional-693000"
	I0723 07:06:10.746861    2915 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-693000"
	I0723 07:06:10.747763    2915 addons.go:234] Setting addon default-storageclass=true in "functional-693000"
	W0723 07:06:10.747765    2915 addons.go:243] addon default-storageclass should already be in state true
	I0723 07:06:10.747771    2915 host.go:66] Checking if "functional-693000" exists ...
	I0723 07:06:10.751100    2915 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 07:06:10.751105    2915 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 07:06:10.751110    2915 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/functional-693000/id_rsa Username:docker}
	I0723 07:06:10.753682    2915 out.go:177] * Verifying Kubernetes components...
	I0723 07:06:10.756724    2915 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:06:10.760801    2915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:06:10.764699    2915 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 07:06:10.764703    2915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 07:06:10.764708    2915 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/functional-693000/id_rsa Username:docker}
	I0723 07:06:10.876126    2915 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 07:06:10.881937    2915 node_ready.go:35] waiting up to 6m0s for node "functional-693000" to be "Ready" ...
	I0723 07:06:10.883366    2915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 07:06:10.937484    2915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 07:06:10.939033    2915 node_ready.go:49] node "functional-693000" has status "Ready":"True"
	I0723 07:06:10.939037    2915 node_ready.go:38] duration metric: took 57.094167ms for node "functional-693000" to be "Ready" ...
	I0723 07:06:10.939040    2915 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 07:06:11.141982    2915 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:11.217989    2915 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0723 07:06:11.225853    2915 addons.go:510] duration metric: took 479.147791ms for enable addons: enabled=[default-storageclass storage-provisioner]
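Addon enablement is similarly plain: manifests are scp'd under /etc/kubernetes/addons and applied inside the VM with the version-matched kubectl against the local admin kubeconfig, so no host-side kubectl is required. A sketch of the logged command:

```go
// applyAddon mirrors the logged addon step: run the guest's versioned
// kubectl with the in-VM kubeconfig against a manifest under
// /etc/kubernetes/addons.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyAddon(kubectl, manifest string) error {
	cmd := exec.Command(kubectl, "apply", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	err := applyAddon("/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/etc/kubernetes/addons/storage-provisioner.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```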
	I0723 07:06:11.541028    2915 pod_ready.go:92] pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace has status "Ready":"True"
	I0723 07:06:11.541036    2915 pod_ready.go:81] duration metric: took 399.052625ms for pod "coredns-7db6d8ff4d-tjt9w" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:11.541041    2915 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:11.941296    2915 pod_ready.go:92] pod "etcd-functional-693000" in "kube-system" namespace has status "Ready":"True"
	I0723 07:06:11.941303    2915 pod_ready.go:81] duration metric: took 400.266375ms for pod "etcd-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:11.941308    2915 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:12.340854    2915 pod_ready.go:92] pod "kube-apiserver-functional-693000" in "kube-system" namespace has status "Ready":"True"
	I0723 07:06:12.340860    2915 pod_ready.go:81] duration metric: took 399.556167ms for pod "kube-apiserver-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:12.340865    2915 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:12.741195    2915 pod_ready.go:92] pod "kube-controller-manager-functional-693000" in "kube-system" namespace has status "Ready":"True"
	I0723 07:06:12.741201    2915 pod_ready.go:81] duration metric: took 400.340542ms for pod "kube-controller-manager-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:12.741205    2915 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mxb8f" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:13.140952    2915 pod_ready.go:92] pod "kube-proxy-mxb8f" in "kube-system" namespace has status "Ready":"True"
	I0723 07:06:13.140959    2915 pod_ready.go:81] duration metric: took 399.757584ms for pod "kube-proxy-mxb8f" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:13.140962    2915 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:13.540695    2915 pod_ready.go:92] pod "kube-scheduler-functional-693000" in "kube-system" namespace has status "Ready":"True"
	I0723 07:06:13.540700    2915 pod_ready.go:81] duration metric: took 399.742459ms for pod "kube-scheduler-functional-693000" in "kube-system" namespace to be "Ready" ...
	I0723 07:06:13.540704    2915 pod_ready.go:38] duration metric: took 2.601704708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 07:06:13.540714    2915 api_server.go:52] waiting for apiserver process to appear ...
	I0723 07:06:13.540794    2915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:06:13.546451    2915 api_server.go:72] duration metric: took 2.79978275s to wait for apiserver process to appear ...
	I0723 07:06:13.546456    2915 api_server.go:88] waiting for apiserver healthz status ...
	I0723 07:06:13.546463    2915 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0723 07:06:13.548914    2915 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0723 07:06:13.549468    2915 api_server.go:141] control plane version: v1.30.3
	I0723 07:06:13.549472    2915 api_server.go:131] duration metric: took 3.014042ms to wait for apiserver health ...
	I0723 07:06:13.549474    2915 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 07:06:13.743406    2915 system_pods.go:59] 7 kube-system pods found
	I0723 07:06:13.743415    2915 system_pods.go:61] "coredns-7db6d8ff4d-tjt9w" [aeed040b-e3a1-4ac7-bab7-d1f44d04a203] Running
	I0723 07:06:13.743417    2915 system_pods.go:61] "etcd-functional-693000" [d8dea9a0-48ed-4373-a1cf-431b06ee1834] Running
	I0723 07:06:13.743419    2915 system_pods.go:61] "kube-apiserver-functional-693000" [2c34b417-0aac-4dfb-aa00-623f06b92185] Running
	I0723 07:06:13.743421    2915 system_pods.go:61] "kube-controller-manager-functional-693000" [89c6d109-8363-427a-8d84-310c3805e4f5] Running
	I0723 07:06:13.743422    2915 system_pods.go:61] "kube-proxy-mxb8f" [ce195fa3-8107-4247-938a-472f38f13710] Running
	I0723 07:06:13.743423    2915 system_pods.go:61] "kube-scheduler-functional-693000" [c8e0a1d7-a2e0-4120-bd99-dd7df1f78de8] Running
	I0723 07:06:13.743424    2915 system_pods.go:61] "storage-provisioner" [9f9ed311-30be-40e7-a66a-2171b56d51f7] Running
	I0723 07:06:13.743427    2915 system_pods.go:74] duration metric: took 193.954292ms to wait for pod list to return data ...
	I0723 07:06:13.743431    2915 default_sa.go:34] waiting for default service account to be created ...
	I0723 07:06:13.941043    2915 default_sa.go:45] found service account: "default"
	I0723 07:06:13.941051    2915 default_sa.go:55] duration metric: took 197.620042ms for default service account to be created ...
	I0723 07:06:13.941054    2915 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 07:06:14.142521    2915 system_pods.go:86] 7 kube-system pods found
	I0723 07:06:14.142527    2915 system_pods.go:89] "coredns-7db6d8ff4d-tjt9w" [aeed040b-e3a1-4ac7-bab7-d1f44d04a203] Running
	I0723 07:06:14.142529    2915 system_pods.go:89] "etcd-functional-693000" [d8dea9a0-48ed-4373-a1cf-431b06ee1834] Running
	I0723 07:06:14.142531    2915 system_pods.go:89] "kube-apiserver-functional-693000" [2c34b417-0aac-4dfb-aa00-623f06b92185] Running
	I0723 07:06:14.142532    2915 system_pods.go:89] "kube-controller-manager-functional-693000" [89c6d109-8363-427a-8d84-310c3805e4f5] Running
	I0723 07:06:14.142534    2915 system_pods.go:89] "kube-proxy-mxb8f" [ce195fa3-8107-4247-938a-472f38f13710] Running
	I0723 07:06:14.142535    2915 system_pods.go:89] "kube-scheduler-functional-693000" [c8e0a1d7-a2e0-4120-bd99-dd7df1f78de8] Running
	I0723 07:06:14.142536    2915 system_pods.go:89] "storage-provisioner" [9f9ed311-30be-40e7-a66a-2171b56d51f7] Running
	I0723 07:06:14.142539    2915 system_pods.go:126] duration metric: took 201.485542ms to wait for k8s-apps to be running ...
	I0723 07:06:14.142541    2915 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 07:06:14.142601    2915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 07:06:14.148833    2915 system_svc.go:56] duration metric: took 6.289708ms WaitForService to wait for kubelet
	I0723 07:06:14.148840    2915 kubeadm.go:582] duration metric: took 3.402183s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:06:14.148849    2915 node_conditions.go:102] verifying NodePressure condition ...
	I0723 07:06:14.341284    2915 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 07:06:14.341288    2915 node_conditions.go:123] node cpu capacity is 2
	I0723 07:06:14.341294    2915 node_conditions.go:105] duration metric: took 192.446334ms to run NodePressure ...
	I0723 07:06:14.341299    2915 start.go:241] waiting for startup goroutines ...
	I0723 07:06:14.341302    2915 start.go:246] waiting for cluster config update ...
	I0723 07:06:14.341308    2915 start.go:255] writing updated cluster config ...
	I0723 07:06:14.341651    2915 ssh_runner.go:195] Run: rm -f paused
	I0723 07:06:14.371992    2915 start.go:600] kubectl: 1.29.2, cluster: 1.30.3 (minor skew: 1)
	I0723 07:06:14.375626    2915 out.go:177] * Done! kubectl is now configured to use "functional-693000" cluster and "default" namespace by default
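The closing line reports a client/server minor-version skew of 1 (kubectl 1.29.2 against cluster 1.30.3), which is inside kubectl's supported one-minor-version window, so it is informational only. A sketch of that comparison:

```go
// minorSkew computes the minor-version distance that the final log line
// reports (kubectl 1.29.2 vs cluster 1.30.3 => skew 1).
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("bad version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func minorSkew(client, server string) (int, error) {
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	d := c - s
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	skew, _ := minorSkew("1.29.2", "1.30.3")
	fmt.Println("minor skew:", skew) // 1
}
```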
	
	
	==> Docker <==
	Jul 23 14:06:53 functional-693000 dockerd[5857]: time="2024-07-23T14:06:53.506326902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 23 14:06:53 functional-693000 cri-dockerd[6134]: time="2024-07-23T14:06:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0fe2c20960c628cce82c1f5aeec81bf56770b51d563e5d1ea1ebafb1daeec642/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 23 14:06:54 functional-693000 cri-dockerd[6134]: time="2024-07-23T14:06:54Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Jul 23 14:06:54 functional-693000 dockerd[5857]: time="2024-07-23T14:06:54.291630300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 23 14:06:54 functional-693000 dockerd[5857]: time="2024-07-23T14:06:54.291790716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 23 14:06:54 functional-693000 dockerd[5857]: time="2024-07-23T14:06:54.291909715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 23 14:06:54 functional-693000 dockerd[5857]: time="2024-07-23T14:06:54.291977131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 23 14:07:01 functional-693000 dockerd[5857]: time="2024-07-23T14:07:01.880470666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 23 14:07:01 functional-693000 dockerd[5857]: time="2024-07-23T14:07:01.880525624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 23 14:07:01 functional-693000 dockerd[5857]: time="2024-07-23T14:07:01.880695081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 23 14:07:01 functional-693000 dockerd[5857]: time="2024-07-23T14:07:01.880788372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 23 14:07:01 functional-693000 cri-dockerd[6134]: time="2024-07-23T14:07:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ba979b1894d22b457826503fc1dbf408a872c66f9cc71830489a93abeeb876a0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 23 14:07:02 functional-693000 cri-dockerd[6134]: time="2024-07-23T14:07:02Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Jul 23 14:07:02 functional-693000 dockerd[5857]: time="2024-07-23T14:07:02.975633634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 23 14:07:02 functional-693000 dockerd[5857]: time="2024-07-23T14:07:02.975705426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 23 14:07:02 functional-693000 dockerd[5857]: time="2024-07-23T14:07:02.975881091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 23 14:07:02 functional-693000 dockerd[5857]: time="2024-07-23T14:07:02.975949299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 23 14:07:03 functional-693000 dockerd[5851]: time="2024-07-23T14:07:03.008764204Z" level=info msg="ignoring event" container=6935a45860b4f68159d6b99d925edefbb501e1ba4642eb7973aef0c91ab39572 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 23 14:07:03 functional-693000 dockerd[5857]: time="2024-07-23T14:07:03.008846412Z" level=info msg="shim disconnected" id=6935a45860b4f68159d6b99d925edefbb501e1ba4642eb7973aef0c91ab39572 namespace=moby
	Jul 23 14:07:03 functional-693000 dockerd[5857]: time="2024-07-23T14:07:03.008890578Z" level=warning msg="cleaning up after shim disconnected" id=6935a45860b4f68159d6b99d925edefbb501e1ba4642eb7973aef0c91ab39572 namespace=moby
	Jul 23 14:07:03 functional-693000 dockerd[5857]: time="2024-07-23T14:07:03.008894620Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 23 14:07:04 functional-693000 dockerd[5851]: time="2024-07-23T14:07:04.228617695Z" level=info msg="ignoring event" container=ba979b1894d22b457826503fc1dbf408a872c66f9cc71830489a93abeeb876a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 23 14:07:04 functional-693000 dockerd[5857]: time="2024-07-23T14:07:04.228689945Z" level=info msg="shim disconnected" id=ba979b1894d22b457826503fc1dbf408a872c66f9cc71830489a93abeeb876a0 namespace=moby
	Jul 23 14:07:04 functional-693000 dockerd[5857]: time="2024-07-23T14:07:04.228715111Z" level=warning msg="cleaning up after shim disconnected" id=ba979b1894d22b457826503fc1dbf408a872c66f9cc71830489a93abeeb876a0 namespace=moby
	Jul 23 14:07:04 functional-693000 dockerd[5857]: time="2024-07-23T14:07:04.228719195Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6935a45860b4f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 seconds ago        Exited              mount-munger              0                   ba979b1894d22       busybox-mount
	0417453d55302       nginx@sha256:97b83c73d3165f2deb95e02459a6e905f092260cd991f4c4eae2f192ddb99cbe                         12 seconds ago       Running             myfrontend                0                   0fe2c20960c62       sp-pod
	79a913ed94200       72565bf5bbedf                                                                                         19 seconds ago       Exited              echoserver-arm            2                   81b51a6b39345       hello-node-connect-6f49f58cd5-4tqb7
	5a16f420c8df8       72565bf5bbedf                                                                                         26 seconds ago       Exited              echoserver-arm            2                   efed795ef6616       hello-node-65f5d5cc78-r5f7z
	976b9d96b479e       nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e                         38 seconds ago       Running             nginx                     0                   a84abb1945457       nginx-svc
	cd1beec5e857c       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   5bd21af473e45       storage-provisioner
	de8468f94062a       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   2bdd293c9ceec       coredns-7db6d8ff4d-tjt9w
	87cdba4036e4b       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   5bd21af473e45       storage-provisioner
	6e2554a526867       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   39c5f05b4a08e       kube-proxy-mxb8f
	ee339be79c696       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   e6c632eb7807c       etcd-functional-693000
	29d88ba898da4       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   7bc0045b3d1bc       kube-controller-manager-functional-693000
	bc2e0a926af16       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   63b189b833f1e       kube-scheduler-functional-693000
	cd55f00f1033f       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   4eeb0bd1f4994       kube-apiserver-functional-693000
	d06ca4c31b6a0       2437cf7621777                                                                                         2 minutes ago        Exited              coredns                   1                   9b711e5a97980       coredns-7db6d8ff4d-tjt9w
	bfa135ab85640       2351f570ed0ea                                                                                         2 minutes ago        Exited              kube-proxy                1                   dd144ad74b0ee       kube-proxy-mxb8f
	23075b787ec09       d48f992a22722                                                                                         2 minutes ago        Exited              kube-scheduler            1                   5e36a2b92c032       kube-scheduler-functional-693000
	27d38e5dd5f41       8e97cdb19e7cc                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   26dea5b62ddbd       kube-controller-manager-functional-693000
	94cb3a81d3695       014faa467e297                                                                                         2 minutes ago        Exited              etcd                      1                   909da1e2fb1d1       etcd-functional-693000
	
	
	==> coredns [d06ca4c31b6a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59634 - 37301 "HINFO IN 3012370217560221957.6551832139654426018. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004243687s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [de8468f94062] <==
	[INFO] 127.0.0.1:54350 - 51437 "HINFO IN 7420520846878344145.5783731640377416684. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011858646s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1746419126]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (23-Jul-2024 14:05:29.127) (total time: 30000ms):
	Trace[1746419126]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:05:59.127)
	Trace[1746419126]: [30.000355033s] [30.000355033s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1959856931]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (23-Jul-2024 14:05:29.127) (total time: 30000ms):
	Trace[1959856931]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:05:59.127)
	Trace[1959856931]: [30.000740491s] [30.000740491s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1319310487]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (23-Jul-2024 14:05:29.127) (total time: 30000ms):
	Trace[1319310487]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:05:59.128)
	Trace[1319310487]: [30.000604948s] [30.000604948s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 10.244.0.1:26146 - 28890 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000108374s
	[INFO] 10.244.0.1:64493 - 51830 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000099958s
	[INFO] 10.244.0.1:51416 - 64494 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000033541s
	[INFO] 10.244.0.1:9198 - 17822 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001039288s
	[INFO] 10.244.0.1:56969 - 35107 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000076833s
	[INFO] 10.244.0.1:26793 - 30556 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000112166s
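
The reflector timeouts above capture CoreDNS losing the API server at 10.96.0.1:443 while it restarted; the later NOERROR answers for nginx-svc show in-cluster DNS recovering once the connection came back. A minimal way to re-check service resolution by hand (the pod name dns-probe is hypothetical; assumes the functional-693000 context is still reachable):

	# run a throwaway busybox pod and resolve the test service through CoreDNS
	kubectl --context functional-693000 run dns-probe --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- \
	  nslookup nginx-svc.default.svc.cluster.local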
	
	
	==> describe nodes <==
	Name:               functional-693000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-693000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=functional-693000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T07_04_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:04:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-693000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:07:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:06:59 +0000   Tue, 23 Jul 2024 14:04:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:06:59 +0000   Tue, 23 Jul 2024 14:04:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:06:59 +0000   Tue, 23 Jul 2024 14:04:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:06:59 +0000   Tue, 23 Jul 2024 14:04:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-693000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0611e83bc1b4ffda23433dac98743b8
	  System UUID:                d0611e83bc1b4ffda23433dac98743b8
	  Boot ID:                    94802c6e-2503-46b5-abbe-925a6ff62454
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-r5f7z                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  default                     hello-node-connect-6f49f58cd5-4tqb7          0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-7db6d8ff4d-tjt9w                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m45s
	  kube-system                 etcd-functional-693000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m
	  kube-system                 kube-apiserver-functional-693000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-functional-693000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-proxy-mxb8f                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-scheduler-functional-693000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)    0 (0%)
	  memory             170Mi (4%)    170Mi (4%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-1Gi      0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	  hugepages-32Mi     0 (0%)        0 (0%)
	  hugepages-64Ki     0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m44s                  kube-proxy       
	  Normal  Starting                 97s                    kube-proxy       
	  Normal  Starting                 2m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m4s                   kubelet          Node functional-693000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  3m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m                     kubelet          Node functional-693000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m                     kubelet          Node functional-693000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m                     kubelet          Node functional-693000 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m                     kubelet          Starting kubelet.
	  Normal  NodeReady                2m56s                  kubelet          Node functional-693000 status is now: NodeReady
	  Normal  RegisteredNode           2m46s                  node-controller  Node functional-693000 event: Registered Node functional-693000 in Controller
	  Normal  NodeHasNoDiskPressure    2m27s (x8 over 2m27s)  kubelet          Node functional-693000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m27s (x8 over 2m27s)  kubelet          Node functional-693000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m27s (x7 over 2m27s)  kubelet          Node functional-693000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m12s                  node-controller  Node functional-693000 event: Registered Node functional-693000 in Controller
	  Normal  Starting                 102s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)    kubelet          Node functional-693000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)    kubelet          Node functional-693000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x7 over 102s)    kubelet          Node functional-693000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           86s                    node-controller  Node functional-693000 event: Registered Node functional-693000 in Controller
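
The percentages in the pod and resource tables are computed against the node's allocatable capacity: 750m of CPU requests against 2 CPUs (2000m) is 37.5%, shown as 37%, and 170Mi (174080Ki) of memory requests against 3904740Ki is about 4%. The same view can be regenerated while the profile is up:

	kubectl --context functional-693000 describe node functional-693000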
	
	
	==> dmesg <==
	[ +14.183654] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.383684] systemd-fstab-generator[4938]: Ignoring "noauto" option for root device
	[Jul23 14:05] systemd-fstab-generator[5373]: Ignoring "noauto" option for root device
	[  +0.055965] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.112245] systemd-fstab-generator[5408]: Ignoring "noauto" option for root device
	[  +0.107084] systemd-fstab-generator[5420]: Ignoring "noauto" option for root device
	[  +0.120107] systemd-fstab-generator[5434]: Ignoring "noauto" option for root device
	[  +5.105766] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.306503] systemd-fstab-generator[6087]: Ignoring "noauto" option for root device
	[  +0.090952] systemd-fstab-generator[6099]: Ignoring "noauto" option for root device
	[  +0.089789] systemd-fstab-generator[6111]: Ignoring "noauto" option for root device
	[  +0.119215] systemd-fstab-generator[6126]: Ignoring "noauto" option for root device
	[  +0.235018] systemd-fstab-generator[6297]: Ignoring "noauto" option for root device
	[  +0.979651] systemd-fstab-generator[6423]: Ignoring "noauto" option for root device
	[  +4.463458] kauditd_printk_skb: 200 callbacks suppressed
	[ +11.588449] kauditd_printk_skb: 30 callbacks suppressed
	[Jul23 14:06] systemd-fstab-generator[7614]: Ignoring "noauto" option for root device
	[  +5.034365] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.450373] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.630035] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.802682] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.857640] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.997407] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.885862] kauditd_printk_skb: 4 callbacks suppressed
	[Jul23 14:07] kauditd_printk_skb: 7 callbacks suppressed
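
The dmesg excerpt is dominated by systemd-fstab-generator and audit-rate-limit noise from the repeated kubelet restarts rather than by kernel faults. When triaging, the full ring buffer can be pulled from the guest VM (assumes the profile is still running):

	minikube -p functional-693000 ssh -- dmesg | tail -n 50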
	
	
	==> etcd [94cb3a81d369] <==
	{"level":"info","ts":"2024-07-23T14:04:40.252899Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-23T14:04:41.399848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-23T14:04:41.399993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-23T14:04:41.400061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-23T14:04:41.400471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-23T14:04:41.400519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-23T14:04:41.400549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-23T14:04:41.400569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-23T14:04:41.402915Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-693000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T14:04:41.403057Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:04:41.403562Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T14:04:41.40361Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T14:04:41.403644Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:04:41.408114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T14:04:41.408118Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-23T14:05:10.654342Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-23T14:05:10.654369Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-693000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-23T14:05:10.654398Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:05:10.654437Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:05:10.663559Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:05:10.663581Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-23T14:05:10.663613Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-23T14:05:10.665849Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-23T14:05:10.665911Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-23T14:05:10.665916Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-693000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [ee339be79c69] <==
	{"level":"info","ts":"2024-07-23T14:05:25.703843Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T14:05:25.703847Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T14:05:25.703953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-23T14:05:25.703974Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-23T14:05:25.704026Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:05:25.704026Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-23T14:05:25.704037Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:05:25.704109Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-23T14:05:25.704118Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-23T14:05:25.70415Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-23T14:05:25.704153Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-23T14:05:27.472101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-23T14:05:27.472311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-23T14:05:27.47238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-23T14:05:27.472414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-23T14:05:27.472439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-23T14:05:27.472482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-23T14:05:27.472505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-23T14:05:27.477568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:05:27.478182Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:05:27.478757Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T14:05:27.478965Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T14:05:27.47756Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-693000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T14:05:27.482071Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-23T14:05:27.482813Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:07:06 up 3 min,  0 users,  load average: 0.32, 0.40, 0.18
	Linux functional-693000 5.10.207 #1 SMP PREEMPT Tue Jul 23 01:19:38 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cd55f00f1033] <==
	I0723 14:05:28.092833       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0723 14:05:28.093215       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0723 14:05:28.092897       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0723 14:05:28.093052       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0723 14:05:28.093751       1 aggregator.go:165] initial CRD sync complete...
	I0723 14:05:28.093778       1 autoregister_controller.go:141] Starting autoregister controller
	I0723 14:05:28.093802       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0723 14:05:28.093819       1 cache.go:39] Caches are synced for autoregister controller
	I0723 14:05:28.093890       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0723 14:05:28.096304       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0723 14:05:28.127049       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0723 14:05:28.996255       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0723 14:05:29.199870       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0723 14:05:29.200348       1 controller.go:615] quota admission added evaluator for: endpoints
	I0723 14:05:29.201718       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0723 14:05:29.674172       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0723 14:05:29.677797       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0723 14:05:29.688027       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0723 14:05:29.699257       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0723 14:05:29.701492       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0723 14:06:15.889811       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.114.167"}
	I0723 14:06:21.294863       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0723 14:06:21.337875       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.111.177"}
	I0723 14:06:25.372592       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.188.89"}
	I0723 14:06:34.771996       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.243.82"}
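
The apiserver log records the clusterIPs handed to the test services (hello-node, nginx-svc, hello-node-connect). Those allocations can be cross-checked against the live Service objects:

	kubectl --context functional-693000 get svc -o wide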
	
	
	==> kube-controller-manager [27d38e5dd5f4] <==
	I0723 14:04:54.194981       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-693000"
	I0723 14:04:54.195002       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0723 14:04:54.208118       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0723 14:04:54.208122       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0723 14:04:54.211332       1 shared_informer.go:320] Caches are synced for cronjob
	I0723 14:04:54.212508       1 shared_informer.go:320] Caches are synced for TTL
	I0723 14:04:54.230335       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0723 14:04:54.232879       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0723 14:04:54.292682       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0723 14:04:54.309970       1 shared_informer.go:320] Caches are synced for HPA
	I0723 14:04:54.312140       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0723 14:04:54.312175       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0723 14:04:54.312199       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0723 14:04:54.312203       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0723 14:04:54.361211       1 shared_informer.go:320] Caches are synced for namespace
	I0723 14:04:54.367551       1 shared_informer.go:320] Caches are synced for service account
	I0723 14:04:54.434249       1 shared_informer.go:320] Caches are synced for resource quota
	I0723 14:04:54.466473       1 shared_informer.go:320] Caches are synced for disruption
	I0723 14:04:54.476940       1 shared_informer.go:320] Caches are synced for resource quota
	I0723 14:04:54.480226       1 shared_informer.go:320] Caches are synced for deployment
	I0723 14:04:54.533122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="300.222153ms"
	I0723 14:04:54.533216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.542µs"
	I0723 14:04:54.845059       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 14:04:54.887316       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 14:04:54.887356       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [29d88ba898da] <==
	I0723 14:05:40.532640       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0723 14:05:40.532689       1 shared_informer.go:320] Caches are synced for attach detach
	I0723 14:05:40.896903       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 14:05:40.927179       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 14:05:40.927199       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0723 14:06:09.895915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.376652ms"
	I0723 14:06:09.896068       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.917µs"
	I0723 14:06:21.304652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="7.968942ms"
	I0723 14:06:21.310396       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="5.461711ms"
	I0723 14:06:21.310517       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="11.583µs"
	I0723 14:06:21.313861       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="16.209µs"
	I0723 14:06:26.967236       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="43.083µs"
	I0723 14:06:27.972206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="23.583µs"
	I0723 14:06:28.980407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="23.583µs"
	I0723 14:06:34.741349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="9.713426ms"
	I0723 14:06:34.746493       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="5.118107ms"
	I0723 14:06:34.746532       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="24µs"
	I0723 14:06:34.750540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="14.375µs"
	I0723 14:06:36.016143       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="28.458µs"
	I0723 14:06:37.022085       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="22.916µs"
	I0723 14:06:41.047829       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="24.334µs"
	I0723 14:06:47.602783       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="27.75µs"
	I0723 14:06:48.088714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="25.291µs"
	I0723 14:06:56.600425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="26.416µs"
	I0723 14:07:02.601239       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="22.75µs"
	
	
	==> kube-proxy [6e2554a52686] <==
	I0723 14:05:29.164315       1 server_linux.go:69] "Using iptables proxy"
	I0723 14:05:29.170075       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0723 14:05:29.183581       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 14:05:29.183601       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 14:05:29.183610       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:05:29.184292       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:05:29.184381       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:05:29.184387       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:05:29.184753       1 config.go:192] "Starting service config controller"
	I0723 14:05:29.184763       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:05:29.184772       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:05:29.184774       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:05:29.185433       1 config.go:319] "Starting node config controller"
	I0723 14:05:29.185466       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:05:29.285028       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 14:05:29.285030       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:05:29.285553       1 shared_informer.go:320] Caches are synced for node config
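
Both kube-proxy instances came up in iptables mode with IPv6 support absent ("No iptables support for family"), so the cluster runs single-stack IPv4. On a kubeadm-managed cluster like this one, the effective proxy configuration is kept in a ConfigMap (assumes the default kubeadm layout):

	kubectl --context functional-693000 -n kube-system get configmap kube-proxy -o yaml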
	
	
	==> kube-proxy [bfa135ab8564] <==
	I0723 14:04:42.940902       1 server_linux.go:69] "Using iptables proxy"
	I0723 14:04:42.951527       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0723 14:04:42.965598       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 14:04:42.965617       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 14:04:42.965626       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:04:42.966650       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:04:42.966711       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:04:42.966716       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:04:42.967257       1 config.go:192] "Starting service config controller"
	I0723 14:04:42.967261       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:04:42.967270       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:04:42.967272       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:04:42.967434       1 config.go:319] "Starting node config controller"
	I0723 14:04:42.967437       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:04:43.067712       1 shared_informer.go:320] Caches are synced for node config
	I0723 14:04:43.067712       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:04:43.067723       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [23075b787ec0] <==
	I0723 14:04:40.822789       1 serving.go:380] Generated self-signed cert in-memory
	W0723 14:04:41.956027       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0723 14:04:41.956043       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 14:04:41.956047       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0723 14:04:41.956050       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0723 14:04:41.982218       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0723 14:04:41.982333       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:04:41.985114       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0723 14:04:41.985577       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0723 14:04:41.985626       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 14:04:41.985705       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0723 14:04:42.085921       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 14:05:10.649357       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0723 14:05:10.649389       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0723 14:05:10.649464       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bc2e0a926af1] <==
	I0723 14:05:26.088394       1 serving.go:380] Generated self-signed cert in-memory
	I0723 14:05:28.043350       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0723 14:05:28.043362       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:05:28.044929       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0723 14:05:28.044941       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0723 14:05:28.044997       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0723 14:05:28.045004       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 14:05:28.045008       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0723 14:05:28.045011       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0723 14:05:28.045075       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0723 14:05:28.045143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0723 14:05:28.145072       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0723 14:05:28.145072       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0723 14:05:28.145086       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 23 14:06:53 functional-693000 kubelet[6430]: I0723 14:06:53.105378    6430 scope.go:117] "RemoveContainer" containerID="1235a5276f8ab44ba596aeb73169de654307c1952d764a72eb1d0f9e7a59baa1"
	Jul 23 14:06:53 functional-693000 kubelet[6430]: I0723 14:06:53.112316    6430 scope.go:117] "RemoveContainer" containerID="1235a5276f8ab44ba596aeb73169de654307c1952d764a72eb1d0f9e7a59baa1"
	Jul 23 14:06:53 functional-693000 kubelet[6430]: E0723 14:06:53.112757    6430 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 1235a5276f8ab44ba596aeb73169de654307c1952d764a72eb1d0f9e7a59baa1" containerID="1235a5276f8ab44ba596aeb73169de654307c1952d764a72eb1d0f9e7a59baa1"
	Jul 23 14:06:53 functional-693000 kubelet[6430]: I0723 14:06:53.112777    6430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"1235a5276f8ab44ba596aeb73169de654307c1952d764a72eb1d0f9e7a59baa1"} err="failed to get container status \"1235a5276f8ab44ba596aeb73169de654307c1952d764a72eb1d0f9e7a59baa1\": rpc error: code = Unknown desc = Error response from daemon: No such container: 1235a5276f8ab44ba596aeb73169de654307c1952d764a72eb1d0f9e7a59baa1"
	Jul 23 14:06:53 functional-693000 kubelet[6430]: I0723 14:06:53.180289    6430 topology_manager.go:215] "Topology Admit Handler" podUID="4cb3d115-1363-4a6f-8cdf-08b1b928f31c" podNamespace="default" podName="sp-pod"
	Jul 23 14:06:53 functional-693000 kubelet[6430]: E0723 14:06:53.180333    6430 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34ad0528-80e4-44a2-802b-444472ec4394" containerName="myfrontend"
	Jul 23 14:06:53 functional-693000 kubelet[6430]: I0723 14:06:53.180349    6430 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ad0528-80e4-44a2-802b-444472ec4394" containerName="myfrontend"
	Jul 23 14:06:53 functional-693000 kubelet[6430]: I0723 14:06:53.347562    6430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eb5986ef-dbca-48b1-8c33-088a39e98adf\" (UniqueName: \"kubernetes.io/host-path/4cb3d115-1363-4a6f-8cdf-08b1b928f31c-pvc-eb5986ef-dbca-48b1-8c33-088a39e98adf\") pod \"sp-pod\" (UID: \"4cb3d115-1363-4a6f-8cdf-08b1b928f31c\") " pod="default/sp-pod"
	Jul 23 14:06:53 functional-693000 kubelet[6430]: I0723 14:06:53.347583    6430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq69f\" (UniqueName: \"kubernetes.io/projected/4cb3d115-1363-4a6f-8cdf-08b1b928f31c-kube-api-access-jq69f\") pod \"sp-pod\" (UID: \"4cb3d115-1363-4a6f-8cdf-08b1b928f31c\") " pod="default/sp-pod"
	Jul 23 14:06:54 functional-693000 kubelet[6430]: I0723 14:06:54.598528    6430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ad0528-80e4-44a2-802b-444472ec4394" path="/var/lib/kubelet/pods/34ad0528-80e4-44a2-802b-444472ec4394/volumes"
	Jul 23 14:06:56 functional-693000 kubelet[6430]: I0723 14:06:56.594114    6430 scope.go:117] "RemoveContainer" containerID="5a16f420c8df8f4517e976eebb332afa293cb1d24e684e2acf068a06f8866d85"
	Jul 23 14:06:56 functional-693000 kubelet[6430]: E0723 14:06:56.594237    6430 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-r5f7z_default(f8089f8d-d588-4543-acc9-c50832cf5440)\"" pod="default/hello-node-65f5d5cc78-r5f7z" podUID="f8089f8d-d588-4543-acc9-c50832cf5440"
	Jul 23 14:06:56 functional-693000 kubelet[6430]: I0723 14:06:56.600089    6430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.891423453 podStartE2EDuration="3.600074274s" podCreationTimestamp="2024-07-23 14:06:53 +0000 UTC" firstStartedPulling="2024-07-23 14:06:53.55856571 +0000 UTC m=+89.024603527" lastFinishedPulling="2024-07-23 14:06:54.267216489 +0000 UTC m=+89.733254348" observedRunningTime="2024-07-23 14:06:55.123228064 +0000 UTC m=+90.589265922" watchObservedRunningTime="2024-07-23 14:06:56.600074274 +0000 UTC m=+92.066112091"
	Jul 23 14:07:01 functional-693000 kubelet[6430]: I0723 14:07:01.556881    6430 topology_manager.go:215] "Topology Admit Handler" podUID="1d2d4e65-c52e-46a3-b61b-0c16460d1668" podNamespace="default" podName="busybox-mount"
	Jul 23 14:07:01 functional-693000 kubelet[6430]: I0723 14:07:01.601280    6430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1d2d4e65-c52e-46a3-b61b-0c16460d1668-test-volume\") pod \"busybox-mount\" (UID: \"1d2d4e65-c52e-46a3-b61b-0c16460d1668\") " pod="default/busybox-mount"
	Jul 23 14:07:01 functional-693000 kubelet[6430]: I0723 14:07:01.601304    6430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kln4\" (UniqueName: \"kubernetes.io/projected/1d2d4e65-c52e-46a3-b61b-0c16460d1668-kube-api-access-8kln4\") pod \"busybox-mount\" (UID: \"1d2d4e65-c52e-46a3-b61b-0c16460d1668\") " pod="default/busybox-mount"
	Jul 23 14:07:02 functional-693000 kubelet[6430]: I0723 14:07:02.594092    6430 scope.go:117] "RemoveContainer" containerID="79a913ed9420058d7bfdcfa35adca38bf81058133d221082c17a6ab656b10449"
	Jul 23 14:07:02 functional-693000 kubelet[6430]: E0723 14:07:02.594180    6430 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-4tqb7_default(7e6e0d75-942f-4698-93e3-d211fe701486)\"" pod="default/hello-node-connect-6f49f58cd5-4tqb7" podUID="7e6e0d75-942f-4698-93e3-d211fe701486"
	Jul 23 14:07:04 functional-693000 kubelet[6430]: I0723 14:07:04.414061    6430 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1d2d4e65-c52e-46a3-b61b-0c16460d1668-test-volume\") pod \"1d2d4e65-c52e-46a3-b61b-0c16460d1668\" (UID: \"1d2d4e65-c52e-46a3-b61b-0c16460d1668\") "
	Jul 23 14:07:04 functional-693000 kubelet[6430]: I0723 14:07:04.414082    6430 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kln4\" (UniqueName: \"kubernetes.io/projected/1d2d4e65-c52e-46a3-b61b-0c16460d1668-kube-api-access-8kln4\") pod \"1d2d4e65-c52e-46a3-b61b-0c16460d1668\" (UID: \"1d2d4e65-c52e-46a3-b61b-0c16460d1668\") "
	Jul 23 14:07:04 functional-693000 kubelet[6430]: I0723 14:07:04.414232    6430 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d2d4e65-c52e-46a3-b61b-0c16460d1668-test-volume" (OuterVolumeSpecName: "test-volume") pod "1d2d4e65-c52e-46a3-b61b-0c16460d1668" (UID: "1d2d4e65-c52e-46a3-b61b-0c16460d1668"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 23 14:07:04 functional-693000 kubelet[6430]: I0723 14:07:04.416873    6430 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d2d4e65-c52e-46a3-b61b-0c16460d1668-kube-api-access-8kln4" (OuterVolumeSpecName: "kube-api-access-8kln4") pod "1d2d4e65-c52e-46a3-b61b-0c16460d1668" (UID: "1d2d4e65-c52e-46a3-b61b-0c16460d1668"). InnerVolumeSpecName "kube-api-access-8kln4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 23 14:07:04 functional-693000 kubelet[6430]: I0723 14:07:04.515088    6430 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1d2d4e65-c52e-46a3-b61b-0c16460d1668-test-volume\") on node \"functional-693000\" DevicePath \"\""
	Jul 23 14:07:04 functional-693000 kubelet[6430]: I0723 14:07:04.515103    6430 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8kln4\" (UniqueName: \"kubernetes.io/projected/1d2d4e65-c52e-46a3-b61b-0c16460d1668-kube-api-access-8kln4\") on node \"functional-693000\" DevicePath \"\""
	Jul 23 14:07:05 functional-693000 kubelet[6430]: I0723 14:07:05.166218    6430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba979b1894d22b457826503fc1dbf408a872c66f9cc71830489a93abeeb876a0"
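
The CrashLoopBackOff entries for echoserver-arm are the proximate failure behind ServiceCmdConnect: the hello-node pods keep exiting, so their services never gain a ready endpoint. The previous container's output is usually the fastest clue (pod name taken from the log above; it will differ between runs):

	kubectl --context functional-693000 logs hello-node-connect-6f49f58cd5-4tqb7 --previous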
	
	
	==> storage-provisioner [87cdba4036e4] <==
	I0723 14:05:29.163428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0723 14:05:29.164469       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [cd1beec5e857] <==
	I0723 14:05:44.657237       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 14:05:44.664112       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 14:05:44.664219       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 14:06:02.049111       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 14:06:02.049217       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-693000_686baba0-3773-4b39-8d55-d9bb9a906cad!
	I0723 14:06:02.049647       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"149d47b7-147c-40a7-a373-e32041e20ee0", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-693000_686baba0-3773-4b39-8d55-d9bb9a906cad became leader
	I0723 14:06:02.150018       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-693000_686baba0-3773-4b39-8d55-d9bb9a906cad!
	I0723 14:06:38.922259       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0723 14:06:38.922405       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    6a1224ff-1e5c-490f-8c2a-a3413203d3a8 342 0 2024-07-23 14:04:21 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-23 14:04:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-eb5986ef-dbca-48b1-8c33-088a39e98adf &PersistentVolumeClaim{ObjectMeta:{myclaim  default  eb5986ef-dbca-48b1-8c33-088a39e98adf 740 0 2024-07-23 14:06:38 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-23 14:06:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-23 14:06:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0723 14:06:38.923021       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-eb5986ef-dbca-48b1-8c33-088a39e98adf" provisioned
	I0723 14:06:38.923106       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0723 14:06:38.923154       1 volume_store.go:212] Trying to save persistentvolume "pvc-eb5986ef-dbca-48b1-8c33-088a39e98adf"
	I0723 14:06:38.925261       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"eb5986ef-dbca-48b1-8c33-088a39e98adf", APIVersion:"v1", ResourceVersion:"740", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0723 14:06:38.929087       1 volume_store.go:219] persistentvolume "pvc-eb5986ef-dbca-48b1-8c33-088a39e98adf" saved
	I0723 14:06:38.930334       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"eb5986ef-dbca-48b1-8c33-088a39e98adf", APIVersion:"v1", ResourceVersion:"740", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-eb5986ef-dbca-48b1-8c33-088a39e98adf
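
The provisioner round-trip above (claim default/myclaim, 500Mi, default class standard, volume written under /tmp/hostpath-provisioner) corresponds to a claim of roughly this shape. The manifest below is a reconstruction from the logged spec, not the test's own file, and the name myclaim-copy is hypothetical:

	kubectl --context functional-693000 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim-copy
	spec:
	  storageClassName: standard
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF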
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-693000 -n functional-693000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-693000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-693000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-693000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-693000/192.168.105.4
	Start Time:       Tue, 23 Jul 2024 07:07:01 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://6935a45860b4f68159d6b99d925edefbb501e1ba4642eb7973aef0c91ab39572
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 23 Jul 2024 07:07:02 -0700
	      Finished:     Tue, 23 Jul 2024 07:07:03 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8kln4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-8kln4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5s    default-scheduler  Successfully assigned default/busybox-mount to functional-693000
	  Normal  Pulling    6s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.009s (1.009s including waiting). Image size: 3547125 bytes.
	  Normal  Created    5s    kubelet            Created container mount-munger
	  Normal  Started    5s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (32.64s)
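A note on why a pod whose Status is Succeeded appears in this post-mortem at all: the field selector status.phase!=Running matches every non-Running phase, Succeeded included, so the cleanly exited busybox-mount pod (Exit Code 0 above) is swept in alongside genuinely broken pods. Below is a minimal Go sketch of the two kubectl invocations the helper logs above; the function name and the flat namespace handling are assumptions for illustration, not the real helpers_test.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// describeNonRunningPods mirrors the two kubectl calls logged above:
// list every pod whose phase is not Running, then describe each match.
func describeNonRunningPods(kubectlContext string) error {
	out, err := exec.Command("kubectl", "--context", kubectlContext,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		return err
	}
	for _, pod := range strings.Fields(string(out)) {
		// Simplification: the real helper also tracks namespaces; plain
		// `describe` without -n only finds pods in the default namespace.
		desc, err := exec.Command("kubectl", "--context", kubectlContext,
			"describe", "pod", pod).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("describe pod %s:\n%s\n", pod, desc)
	}
	return nil
}

func main() {
	if err := describeNonRunningPods("functional-693000"); err != nil {
		fmt.Println("post-mortem failed:", err)
	}
}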

TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 node stop m02 -v=7 --alsologtostderr
E0723 07:11:41.790408    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-023000 node stop m02 -v=7 --alsologtostderr: (12.185866917s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr
E0723 07:12:02.272016    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:12:43.233568    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:14:05.154371    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr: exit status 7 (2m55.969724291s)

-- stdout --
	ha-023000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-023000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-023000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-023000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0723 07:11:47.846899    3624 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:11:47.847055    3624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:11:47.847059    3624 out.go:304] Setting ErrFile to fd 2...
	I0723 07:11:47.847062    3624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:11:47.847184    3624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:11:47.847304    3624 out.go:298] Setting JSON to false
	I0723 07:11:47.847321    3624 mustload.go:65] Loading cluster: ha-023000
	I0723 07:11:47.847430    3624 notify.go:220] Checking for updates...
	I0723 07:11:47.847543    3624 config.go:182] Loaded profile config "ha-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:11:47.847551    3624 status.go:255] checking status of ha-023000 ...
	I0723 07:11:47.848256    3624 status.go:330] ha-023000 host status = "Running" (err=<nil>)
	I0723 07:11:47.848263    3624 host.go:66] Checking if "ha-023000" exists ...
	I0723 07:11:47.848351    3624 host.go:66] Checking if "ha-023000" exists ...
	I0723 07:11:47.848455    3624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 07:11:47.848465    3624 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/id_rsa Username:docker}
	W0723 07:12:13.775853    3624 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0723 07:12:13.776002    3624 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0723 07:12:13.776032    3624 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0723 07:12:13.776041    3624 status.go:257] ha-023000 status: &{Name:ha-023000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0723 07:12:13.776066    3624 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0723 07:12:13.776077    3624 status.go:255] checking status of ha-023000-m02 ...
	I0723 07:12:13.776494    3624 status.go:330] ha-023000-m02 host status = "Stopped" (err=<nil>)
	I0723 07:12:13.776504    3624 status.go:343] host is not running, skipping remaining checks
	I0723 07:12:13.776509    3624 status.go:257] ha-023000-m02 status: &{Name:ha-023000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 07:12:13.776521    3624 status.go:255] checking status of ha-023000-m03 ...
	I0723 07:12:13.777680    3624 status.go:330] ha-023000-m03 host status = "Running" (err=<nil>)
	I0723 07:12:13.777691    3624 host.go:66] Checking if "ha-023000-m03" exists ...
	I0723 07:12:13.777838    3624 host.go:66] Checking if "ha-023000-m03" exists ...
	I0723 07:12:13.777965    3624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 07:12:13.777974    3624 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m03/id_rsa Username:docker}
	W0723 07:13:28.779356    3624 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0723 07:13:28.779455    3624 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0723 07:13:28.779467    3624 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0723 07:13:28.779472    3624 status.go:257] ha-023000-m03 status: &{Name:ha-023000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0723 07:13:28.779482    3624 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0723 07:13:28.779488    3624 status.go:255] checking status of ha-023000-m04 ...
	I0723 07:13:28.780259    3624 status.go:330] ha-023000-m04 host status = "Running" (err=<nil>)
	I0723 07:13:28.780268    3624 host.go:66] Checking if "ha-023000-m04" exists ...
	I0723 07:13:28.780372    3624 host.go:66] Checking if "ha-023000-m04" exists ...
	I0723 07:13:28.780496    3624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 07:13:28.780506    3624 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m04/id_rsa Username:docker}
	W0723 07:14:43.781330    3624 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0723 07:14:43.781376    3624 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0723 07:14:43.781386    3624 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0723 07:14:43.781390    3624 status.go:257] ha-023000-m04 status: &{Name:ha-023000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0723 07:14:43.781399    3624 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
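Each stanza in the stdout above is the rendering of one of the records logged at status.go:257 in the stderr. A hedged reconstruction of that record, for readability: the field names are verbatim from the printed structs, while the Go types and value comments are assumptions.

package main

import "fmt"

// nodeStatus reconstructs the per-node record printed at status.go:257.
type nodeStatus struct {
	Name       string // e.g. "ha-023000-m02"
	Host       string // "Running", "Stopped", or "Error" when the SSH dial fails
	Kubelet    string // reported "Nonexistent" once the host is unreachable
	APIServer  string // "Irrelevant" on the worker node ha-023000-m04
	Kubeconfig string // stays "Configured" while the kubeconfig entry exists
	Worker     bool
}

func main() {
	// Reproduces the ha-023000-m02 stanza from the stdout above.
	s := nodeStatus{Name: "ha-023000-m02", Host: "Stopped",
		Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	fmt.Printf("%s\nhost: %s\nkubelet: %s\napiserver: %s\nkubeconfig: %s\n",
		s.Name, s.Host, s.Kubelet, s.APIServer, s.Kubeconfig)
}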
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr": ha-023000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-023000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-023000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-023000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr": ha-023000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-023000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-023000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-023000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr": ha-023000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-023000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-023000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-023000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000
E0723 07:14:46.577841    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000: exit status 3 (25.963377125s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0723 07:15:09.744602    3682 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0723 07:15:09.744611    3682 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-023000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.12s)
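The 2m55.97s wall time of the single status call above is almost entirely SSH dial timeouts: the probe runs `df -h /var` over SSH, and each unreachable node burns roughly 26-75 s before status.go gives up and records Host:Error (the timeouts land at 07:12:13, 07:13:28, and 07:14:43 in the stderr). A hedged way to reproduce just the connectivity branch, without minikube's sshutil machinery; the node IPs are taken from the log, everything else is a diagnostic sketch.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A bare TCP dial with a deadline hits the same
	// "connect: operation timed out" branch seen in the status stderr.
	for _, addr := range []string{"192.168.105.5:22", "192.168.105.7:22", "192.168.105.8:22"} {
		conn, err := net.DialTimeout("tcp", addr, 30*time.Second)
		if err != nil {
			fmt.Printf("%s unreachable: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s reachable\n", addr)
	}
}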

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0723 07:16:21.286849    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.043007292s)
ha_test.go:413: expected profile "ha-023000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-023000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-023000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-023000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docke
r\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000
E0723 07:16:48.989160    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000: exit status 3 (25.9579325s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0723 07:16:52.735198    3726 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0723 07:16:52.735260    3726 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-023000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.00s)

TestMultiControlPlane/serial/RestartSecondaryNode (182.84s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-023000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.124434709s)

-- stdout --
	* Starting "ha-023000-m02" control-plane node in "ha-023000" cluster
	* Restarting existing qemu2 VM for "ha-023000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-023000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:16:52.808978    3735 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:16:52.809293    3735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:16:52.809300    3735 out.go:304] Setting ErrFile to fd 2...
	I0723 07:16:52.809303    3735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:16:52.809484    3735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:16:52.809787    3735 mustload.go:65] Loading cluster: ha-023000
	I0723 07:16:52.810075    3735 config.go:182] Loaded profile config "ha-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0723 07:16:52.810371    3735 host.go:58] "ha-023000-m02" host status: Stopped
	I0723 07:16:52.814468    3735 out.go:177] * Starting "ha-023000-m02" control-plane node in "ha-023000" cluster
	I0723 07:16:52.817661    3735 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:16:52.817684    3735 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:16:52.817698    3735 cache.go:56] Caching tarball of preloaded images
	I0723 07:16:52.817794    3735 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:16:52.817800    3735 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:16:52.817876    3735 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/ha-023000/config.json ...
	I0723 07:16:52.818537    3735 start.go:360] acquireMachinesLock for ha-023000-m02: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:16:52.818587    3735 start.go:364] duration metric: took 34.708µs to acquireMachinesLock for "ha-023000-m02"
	I0723 07:16:52.818598    3735 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:16:52.818603    3735 fix.go:54] fixHost starting: m02
	I0723 07:16:52.818772    3735 fix.go:112] recreateIfNeeded on ha-023000-m02: state=Stopped err=<nil>
	W0723 07:16:52.818779    3735 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:16:52.823675    3735 out.go:177] * Restarting existing qemu2 VM for "ha-023000-m02" ...
	I0723 07:16:52.827613    3735 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:16:52.827676    3735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:43:9d:9b:71:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/disk.qcow2
	I0723 07:16:52.830462    3735 main.go:141] libmachine: STDOUT: 
	I0723 07:16:52.830490    3735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:16:52.830518    3735 fix.go:56] duration metric: took 11.914708ms for fixHost
	I0723 07:16:52.830523    3735 start.go:83] releasing machines lock for "ha-023000-m02", held for 11.931916ms
	W0723 07:16:52.830530    3735 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:16:52.830558    3735 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:16:52.830563    3735 start.go:729] Will try again in 5 seconds ...
	I0723 07:16:57.832352    3735 start.go:360] acquireMachinesLock for ha-023000-m02: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:16:57.832479    3735 start.go:364] duration metric: took 96.416µs to acquireMachinesLock for "ha-023000-m02"
	I0723 07:16:57.832509    3735 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:16:57.832513    3735 fix.go:54] fixHost starting: m02
	I0723 07:16:57.832661    3735 fix.go:112] recreateIfNeeded on ha-023000-m02: state=Stopped err=<nil>
	W0723 07:16:57.832666    3735 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:16:57.836877    3735 out.go:177] * Restarting existing qemu2 VM for "ha-023000-m02" ...
	I0723 07:16:57.844265    3735 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:16:57.844307    3735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:43:9d:9b:71:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/disk.qcow2
	I0723 07:16:57.846152    3735 main.go:141] libmachine: STDOUT: 
	I0723 07:16:57.846170    3735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:16:57.846193    3735 fix.go:56] duration metric: took 13.681833ms for fixHost
	I0723 07:16:57.846197    3735 start.go:83] releasing machines lock for "ha-023000-m02", held for 13.714709ms
	W0723 07:16:57.846241    3735 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:16:57.849922    3735 out.go:177] 
	W0723 07:16:57.853944    3735 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:16:57.853949    3735 out.go:239] * 
	* 
	W0723 07:16:57.855574    3735 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:16:57.859970    3735 out.go:177] 

** /stderr **
ha_test.go:422: I0723 07:16:52.808978    3735 out.go:291] Setting OutFile to fd 1 ...
I0723 07:16:52.809293    3735 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:16:52.809300    3735 out.go:304] Setting ErrFile to fd 2...
I0723 07:16:52.809303    3735 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:16:52.809484    3735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
I0723 07:16:52.809787    3735 mustload.go:65] Loading cluster: ha-023000
I0723 07:16:52.810075    3735 config.go:182] Loaded profile config "ha-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0723 07:16:52.810371    3735 host.go:58] "ha-023000-m02" host status: Stopped
I0723 07:16:52.814468    3735 out.go:177] * Starting "ha-023000-m02" control-plane node in "ha-023000" cluster
I0723 07:16:52.817661    3735 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0723 07:16:52.817684    3735 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0723 07:16:52.817698    3735 cache.go:56] Caching tarball of preloaded images
I0723 07:16:52.817794    3735 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0723 07:16:52.817800    3735 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0723 07:16:52.817876    3735 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/ha-023000/config.json ...
I0723 07:16:52.818537    3735 start.go:360] acquireMachinesLock for ha-023000-m02: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0723 07:16:52.818587    3735 start.go:364] duration metric: took 34.708µs to acquireMachinesLock for "ha-023000-m02"
I0723 07:16:52.818598    3735 start.go:96] Skipping create...Using existing machine configuration
I0723 07:16:52.818603    3735 fix.go:54] fixHost starting: m02
I0723 07:16:52.818772    3735 fix.go:112] recreateIfNeeded on ha-023000-m02: state=Stopped err=<nil>
W0723 07:16:52.818779    3735 fix.go:138] unexpected machine state, will restart: <nil>
I0723 07:16:52.823675    3735 out.go:177] * Restarting existing qemu2 VM for "ha-023000-m02" ...
I0723 07:16:52.827613    3735 qemu.go:418] Using hvf for hardware acceleration
I0723 07:16:52.827676    3735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:43:9d:9b:71:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/disk.qcow2
I0723 07:16:52.830462    3735 main.go:141] libmachine: STDOUT: 
I0723 07:16:52.830490    3735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0723 07:16:52.830518    3735 fix.go:56] duration metric: took 11.914708ms for fixHost
I0723 07:16:52.830523    3735 start.go:83] releasing machines lock for "ha-023000-m02", held for 11.931916ms
W0723 07:16:52.830530    3735 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0723 07:16:52.830558    3735 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0723 07:16:52.830563    3735 start.go:729] Will try again in 5 seconds ...
I0723 07:16:57.832352    3735 start.go:360] acquireMachinesLock for ha-023000-m02: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0723 07:16:57.832479    3735 start.go:364] duration metric: took 96.416µs to acquireMachinesLock for "ha-023000-m02"
I0723 07:16:57.832509    3735 start.go:96] Skipping create...Using existing machine configuration
I0723 07:16:57.832513    3735 fix.go:54] fixHost starting: m02
I0723 07:16:57.832661    3735 fix.go:112] recreateIfNeeded on ha-023000-m02: state=Stopped err=<nil>
W0723 07:16:57.832666    3735 fix.go:138] unexpected machine state, will restart: <nil>
I0723 07:16:57.836877    3735 out.go:177] * Restarting existing qemu2 VM for "ha-023000-m02" ...
I0723 07:16:57.844265    3735 qemu.go:418] Using hvf for hardware acceleration
I0723 07:16:57.844307    3735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:43:9d:9b:71:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m02/disk.qcow2
I0723 07:16:57.846152    3735 main.go:141] libmachine: STDOUT: 
I0723 07:16:57.846170    3735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0723 07:16:57.846193    3735 fix.go:56] duration metric: took 13.681833ms for fixHost
I0723 07:16:57.846197    3735 start.go:83] releasing machines lock for "ha-023000-m02", held for 13.714709ms
W0723 07:16:57.846241    3735 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0723 07:16:57.849922    3735 out.go:177] 
W0723 07:16:57.853944    3735 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0723 07:16:57.853949    3735 out.go:239] * 
* 
W0723 07:16:57.855574    3735 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0723 07:16:57.859970    3735 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-023000 node start m02 -v=7 --alsologtostderr": exit status 80
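Both restart attempts above die before qemu ever launches: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so every VM start on this host fails the same way. A hedged check that isolates this precondition; the socket path is taken from the log, and the rest is a diagnostic sketch, not minikube code.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Matches the `Connection refused` in the restart output above.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}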
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr: exit status 7 (2m31.755855792s)

-- stdout --
	ha-023000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-023000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-023000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-023000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0723 07:16:57.894885    3743 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:16:57.895039    3743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:16:57.895043    3743 out.go:304] Setting ErrFile to fd 2...
	I0723 07:16:57.895045    3743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:16:57.895169    3743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:16:57.895288    3743 out.go:298] Setting JSON to false
	I0723 07:16:57.895307    3743 mustload.go:65] Loading cluster: ha-023000
	I0723 07:16:57.895356    3743 notify.go:220] Checking for updates...
	I0723 07:16:57.895527    3743 config.go:182] Loaded profile config "ha-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:16:57.895535    3743 status.go:255] checking status of ha-023000 ...
	I0723 07:16:57.896206    3743 status.go:330] ha-023000 host status = "Running" (err=<nil>)
	I0723 07:16:57.896215    3743 host.go:66] Checking if "ha-023000" exists ...
	I0723 07:16:57.896314    3743 host.go:66] Checking if "ha-023000" exists ...
	I0723 07:16:57.896430    3743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 07:16:57.896441    3743 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/id_rsa Username:docker}
	W0723 07:16:57.896612    3743 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0723 07:16:57.896631    3743 retry.go:31] will retry after 202.238875ms: dial tcp 192.168.105.5:22: connect: host is down
	W0723 07:16:58.101057    3743 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0723 07:16:58.101090    3743 retry.go:31] will retry after 346.641233ms: dial tcp 192.168.105.5:22: connect: host is down
	W0723 07:16:58.448935    3743 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0723 07:16:58.448957    3743 retry.go:31] will retry after 294.979726ms: dial tcp 192.168.105.5:22: connect: host is down
	W0723 07:16:58.745317    3743 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0723 07:16:58.745333    3743 retry.go:31] will retry after 864.363625ms: dial tcp 192.168.105.5:22: connect: host is down
	W0723 07:16:59.611896    3743 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	W0723 07:16:59.611984    3743 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	E0723 07:16:59.612000    3743 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0723 07:16:59.612004    3743 status.go:257] ha-023000 status: &{Name:ha-023000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0723 07:16:59.612016    3743 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0723 07:16:59.612019    3743 status.go:255] checking status of ha-023000-m02 ...
	I0723 07:16:59.612214    3743 status.go:330] ha-023000-m02 host status = "Stopped" (err=<nil>)
	I0723 07:16:59.612219    3743 status.go:343] host is not running, skipping remaining checks
	I0723 07:16:59.612222    3743 status.go:257] ha-023000-m02 status: &{Name:ha-023000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 07:16:59.612226    3743 status.go:255] checking status of ha-023000-m03 ...
	I0723 07:16:59.612817    3743 status.go:330] ha-023000-m03 host status = "Running" (err=<nil>)
	I0723 07:16:59.612822    3743 host.go:66] Checking if "ha-023000-m03" exists ...
	I0723 07:16:59.612934    3743 host.go:66] Checking if "ha-023000-m03" exists ...
	I0723 07:16:59.613063    3743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 07:16:59.613069    3743 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m03/id_rsa Username:docker}
	W0723 07:18:14.613641    3743 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0723 07:18:14.613774    3743 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0723 07:18:14.613816    3743 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0723 07:18:14.613827    3743 status.go:257] ha-023000-m03 status: &{Name:ha-023000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0723 07:18:14.613847    3743 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0723 07:18:14.613862    3743 status.go:255] checking status of ha-023000-m04 ...
	I0723 07:18:14.615653    3743 status.go:330] ha-023000-m04 host status = "Running" (err=<nil>)
	I0723 07:18:14.615672    3743 host.go:66] Checking if "ha-023000-m04" exists ...
	I0723 07:18:14.615974    3743 host.go:66] Checking if "ha-023000-m04" exists ...
	I0723 07:18:14.616292    3743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 07:18:14.616307    3743 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000-m04/id_rsa Username:docker}
	W0723 07:19:29.616330    3743 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0723 07:19:29.616375    3743 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0723 07:19:29.616383    3743 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0723 07:19:29.616386    3743 status.go:257] ha-023000-m04 status: &{Name:ha-023000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0723 07:19:29.616442    3743 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000
E0723 07:19:46.566967    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000: exit status 3 (25.957088417s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0723 07:19:55.569871    3796 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0723 07:19:55.569906    3796 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-023000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (182.84s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-023000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-023000 -v=7 --alsologtostderr
E0723 07:21:21.276833    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:24:46.561012    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-023000 -v=7 --alsologtostderr: (3m49.028260792s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-023000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-023000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.229700417s)

-- stdout --
	* [ha-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-023000" primary control-plane node in "ha-023000" cluster
	* Restarting existing qemu2 VM for "ha-023000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-023000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:25:01.567002    3926 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:25:01.567186    3926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:25:01.567190    3926 out.go:304] Setting ErrFile to fd 2...
	I0723 07:25:01.567194    3926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:25:01.567350    3926 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:25:01.568672    3926 out.go:298] Setting JSON to false
	I0723 07:25:01.589250    3926 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3265,"bootTime":1721741436,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:25:01.589332    3926 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:25:01.594945    3926 out.go:177] * [ha-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:25:01.602943    3926 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:25:01.603005    3926 notify.go:220] Checking for updates...
	I0723 07:25:01.609885    3926 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:25:01.612927    3926 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:25:01.615978    3926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:25:01.617209    3926 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:25:01.623925    3926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:25:01.627233    3926 config.go:182] Loaded profile config "ha-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:25:01.627291    3926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:25:01.631941    3926 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:25:01.639764    3926 start.go:297] selected driver: qemu2
	I0723 07:25:01.639771    3926 start.go:901] validating driver "qemu2" against &{Name:ha-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-023000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:25:01.639841    3926 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:25:01.642657    3926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:25:01.642684    3926 cni.go:84] Creating CNI manager for ""
	I0723 07:25:01.642692    3926 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0723 07:25:01.642748    3926 start.go:340] cluster config:
	{Name:ha-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-023000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:25:01.647320    3926 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:25:01.655955    3926 out.go:177] * Starting "ha-023000" primary control-plane node in "ha-023000" cluster
	I0723 07:25:01.659921    3926 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:25:01.659936    3926 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:25:01.659955    3926 cache.go:56] Caching tarball of preloaded images
	I0723 07:25:01.660024    3926 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:25:01.660029    3926 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:25:01.660107    3926 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/ha-023000/config.json ...
	I0723 07:25:01.660571    3926 start.go:360] acquireMachinesLock for ha-023000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:25:01.660610    3926 start.go:364] duration metric: took 32.833µs to acquireMachinesLock for "ha-023000"
	I0723 07:25:01.660624    3926 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:25:01.660631    3926 fix.go:54] fixHost starting: 
	I0723 07:25:01.660755    3926 fix.go:112] recreateIfNeeded on ha-023000: state=Stopped err=<nil>
	W0723 07:25:01.660764    3926 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:25:01.663942    3926 out.go:177] * Restarting existing qemu2 VM for "ha-023000" ...
	I0723 07:25:01.671950    3926 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:25:01.671994    3926 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:1e:9f:de:dd:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/disk.qcow2
	I0723 07:25:01.674036    3926 main.go:141] libmachine: STDOUT: 
	I0723 07:25:01.674054    3926 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:25:01.674082    3926 fix.go:56] duration metric: took 13.451167ms for fixHost
	I0723 07:25:01.674088    3926 start.go:83] releasing machines lock for "ha-023000", held for 13.473292ms
	W0723 07:25:01.674094    3926 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:25:01.674128    3926 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:25:01.674134    3926 start.go:729] Will try again in 5 seconds ...
	I0723 07:25:06.676249    3926 start.go:360] acquireMachinesLock for ha-023000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:25:06.676650    3926 start.go:364] duration metric: took 331.084µs to acquireMachinesLock for "ha-023000"
	I0723 07:25:06.676756    3926 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:25:06.676774    3926 fix.go:54] fixHost starting: 
	I0723 07:25:06.677486    3926 fix.go:112] recreateIfNeeded on ha-023000: state=Stopped err=<nil>
	W0723 07:25:06.677513    3926 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:25:06.681904    3926 out.go:177] * Restarting existing qemu2 VM for "ha-023000" ...
	I0723 07:25:06.690854    3926 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:25:06.691065    3926 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:1e:9f:de:dd:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/disk.qcow2
	I0723 07:25:06.699933    3926 main.go:141] libmachine: STDOUT: 
	I0723 07:25:06.699988    3926 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:25:06.700049    3926 fix.go:56] duration metric: took 23.27625ms for fixHost
	I0723 07:25:06.700069    3926 start.go:83] releasing machines lock for "ha-023000", held for 23.394ms
	W0723 07:25:06.700236    3926 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:25:06.707841    3926 out.go:177] 
	W0723 07:25:06.711932    3926 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:25:06.711953    3926 out.go:239] * 
	* 
	W0723 07:25:06.714454    3926 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:25:06.724845    3926 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-023000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-023000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000: exit status 7 (33.051125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)
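
Every failed restart in this run dies at the same point: socket_vmnet_client cannot reach the control socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 driver never receives a network file descriptor and start gives up with GUEST_PROVISION after a single 5-second retry. A minimal triage sketch for the CI host follows, assuming the Homebrew layout shown in the log; the launchctl check and the --vmnet-gateway flag are taken from the socket_vmnet README rather than from this report:

    # Is the control socket present, and is the daemon that owns it alive?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet
    # If the daemon is down, start it in the foreground; the gateway matches the 192.168.105.x guest addresses above.
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet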

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-023000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.341667ms)

-- stdout --
	* The control-plane node ha-023000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-023000"

-- /stdout --
** stderr ** 
	I0723 07:25:06.862519    3941 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:25:06.862796    3941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:25:06.862800    3941 out.go:304] Setting ErrFile to fd 2...
	I0723 07:25:06.862802    3941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:25:06.862928    3941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:25:06.863133    3941 mustload.go:65] Loading cluster: ha-023000
	I0723 07:25:06.863352    3941 config.go:182] Loaded profile config "ha-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0723 07:25:06.863647    3941 out.go:239] ! The control-plane node ha-023000 host is not running (will try others): state=Stopped
	! The control-plane node ha-023000 host is not running (will try others): state=Stopped
	W0723 07:25:06.863767    3941 out.go:239] ! The control-plane node ha-023000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-023000-m02 host is not running (will try others): state=Stopped
	I0723 07:25:06.868500    3941 out.go:177] * The control-plane node ha-023000-m03 host is not running: state=Stopped
	I0723 07:25:06.871524    3941 out.go:177]   To start a cluster, run: "minikube start -p ha-023000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-023000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr: exit status 7 (29.430875ms)

-- stdout --
	ha-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-023000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-023000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-023000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0723 07:25:06.902839    3943 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:25:06.902998    3943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:25:06.903001    3943 out.go:304] Setting ErrFile to fd 2...
	I0723 07:25:06.903003    3943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:25:06.903118    3943 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:25:06.903237    3943 out.go:298] Setting JSON to false
	I0723 07:25:06.903247    3943 mustload.go:65] Loading cluster: ha-023000
	I0723 07:25:06.903319    3943 notify.go:220] Checking for updates...
	I0723 07:25:06.903461    3943 config.go:182] Loaded profile config "ha-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:25:06.903470    3943 status.go:255] checking status of ha-023000 ...
	I0723 07:25:06.903692    3943 status.go:330] ha-023000 host status = "Stopped" (err=<nil>)
	I0723 07:25:06.903696    3943 status.go:343] host is not running, skipping remaining checks
	I0723 07:25:06.903698    3943 status.go:257] ha-023000 status: &{Name:ha-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 07:25:06.903707    3943 status.go:255] checking status of ha-023000-m02 ...
	I0723 07:25:06.903800    3943 status.go:330] ha-023000-m02 host status = "Stopped" (err=<nil>)
	I0723 07:25:06.903803    3943 status.go:343] host is not running, skipping remaining checks
	I0723 07:25:06.903804    3943 status.go:257] ha-023000-m02 status: &{Name:ha-023000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 07:25:06.903810    3943 status.go:255] checking status of ha-023000-m03 ...
	I0723 07:25:06.903895    3943 status.go:330] ha-023000-m03 host status = "Stopped" (err=<nil>)
	I0723 07:25:06.903898    3943 status.go:343] host is not running, skipping remaining checks
	I0723 07:25:06.903900    3943 status.go:257] ha-023000-m03 status: &{Name:ha-023000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 07:25:06.903903    3943 status.go:255] checking status of ha-023000-m04 ...
	I0723 07:25:06.904001    3943 status.go:330] ha-023000-m04 host status = "Stopped" (err=<nil>)
	I0723 07:25:06.904003    3943 status.go:343] host is not running, skipping remaining checks
	I0723 07:25:06.904005    3943 status.go:257] ha-023000-m04 status: &{Name:ha-023000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000: exit status 7 (29.740625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
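
The exit status 83 above is advice, not a crash: mustload walks the control-plane hosts ha-023000, ha-023000-m02, and ha-023000-m03 in order, finds each one Stopped, and prints the "To start a cluster" hint without attempting a delete. The recovery path the output itself points at, assuming the socket_vmnet failure above is fixed first, would be:

    out/minikube-darwin-arm64 start -p ha-023000 --wait=true -v=7 --alsologtostderr
    out/minikube-darwin-arm64 -p ha-023000 node delete m03 -v=7 --alsologtostderr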

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.15s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.09357625s)
ha_test.go:413: expected profile "ha-023000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-023000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-023000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-023000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000: exit status 7 (55.965917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.15s)
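
The assertion at ha_test.go:413 reads the profile's Status field out of `profile list --output json`; with all four hosts stopped the profile can only report "Stopped", never "Degraded". To pull the same field by hand, something along these lines works (jq here is an illustration; the test decodes the JSON itself):

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-023000") | .Status'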

TestMultiControlPlane/serial/StopCluster (202.08s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 stop -v=7 --alsologtostderr
E0723 07:26:21.271728    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:27:44.339052    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-023000 stop -v=7 --alsologtostderr: (3m21.983689916s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr: exit status 7 (63.865209ms)

-- stdout --
	ha-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-023000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-023000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-023000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0723 07:28:30.126419    4035 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:28:30.126645    4035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:28:30.126650    4035 out.go:304] Setting ErrFile to fd 2...
	I0723 07:28:30.126652    4035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:28:30.126841    4035 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:28:30.127025    4035 out.go:298] Setting JSON to false
	I0723 07:28:30.127037    4035 mustload.go:65] Loading cluster: ha-023000
	I0723 07:28:30.127074    4035 notify.go:220] Checking for updates...
	I0723 07:28:30.127334    4035 config.go:182] Loaded profile config "ha-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:28:30.127346    4035 status.go:255] checking status of ha-023000 ...
	I0723 07:28:30.127660    4035 status.go:330] ha-023000 host status = "Stopped" (err=<nil>)
	I0723 07:28:30.127665    4035 status.go:343] host is not running, skipping remaining checks
	I0723 07:28:30.127668    4035 status.go:257] ha-023000 status: &{Name:ha-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 07:28:30.127681    4035 status.go:255] checking status of ha-023000-m02 ...
	I0723 07:28:30.127810    4035 status.go:330] ha-023000-m02 host status = "Stopped" (err=<nil>)
	I0723 07:28:30.127814    4035 status.go:343] host is not running, skipping remaining checks
	I0723 07:28:30.127817    4035 status.go:257] ha-023000-m02 status: &{Name:ha-023000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 07:28:30.127822    4035 status.go:255] checking status of ha-023000-m03 ...
	I0723 07:28:30.127951    4035 status.go:330] ha-023000-m03 host status = "Stopped" (err=<nil>)
	I0723 07:28:30.127957    4035 status.go:343] host is not running, skipping remaining checks
	I0723 07:28:30.127959    4035 status.go:257] ha-023000-m03 status: &{Name:ha-023000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 07:28:30.127965    4035 status.go:255] checking status of ha-023000-m04 ...
	I0723 07:28:30.128111    4035 status.go:330] ha-023000-m04 host status = "Stopped" (err=<nil>)
	I0723 07:28:30.128116    4035 status.go:343] host is not running, skipping remaining checks
	I0723 07:28:30.128118    4035 status.go:257] ha-023000-m04 status: &{Name:ha-023000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr": ha-023000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-023000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-023000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-023000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr": ha-023000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-023000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-023000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-023000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr": ha-023000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-023000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-023000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-023000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000: exit status 7 (32.344709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.08s)
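
Note that the stop itself succeeded (3m21.98s) and every host reads Stopped; the three assertions fail in cascade from the earlier DeleteSecondaryNode failure, since m03 is still listed as a third control plane. The exit status 7 from the status command is itself expected for a fully stopped cluster: per minikube's own help text the exit code is bit-encoded, 1 (host) + 2 (cluster) + 4 (Kubernetes), so:

    out/minikube-darwin-arm64 -p ha-023000 status; echo "exit: $?"   # prints 7 when host, cluster, and Kubernetes are all down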

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-023000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-023000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.177275625s)

-- stdout --
	* [ha-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-023000" primary control-plane node in "ha-023000" cluster
	* Restarting existing qemu2 VM for "ha-023000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-023000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:28:30.188635    4039 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:28:30.188761    4039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:28:30.188765    4039 out.go:304] Setting ErrFile to fd 2...
	I0723 07:28:30.188767    4039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:28:30.188893    4039 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:28:30.189906    4039 out.go:298] Setting JSON to false
	I0723 07:28:30.205885    4039 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3474,"bootTime":1721741436,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:28:30.205951    4039 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:28:30.211155    4039 out.go:177] * [ha-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:28:30.218234    4039 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:28:30.218298    4039 notify.go:220] Checking for updates...
	I0723 07:28:30.224156    4039 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:28:30.227142    4039 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:28:30.230117    4039 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:28:30.233070    4039 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:28:30.236148    4039 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:28:30.239395    4039 config.go:182] Loaded profile config "ha-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:28:30.239645    4039 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:28:30.244066    4039 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:28:30.250044    4039 start.go:297] selected driver: qemu2
	I0723 07:28:30.250052    4039 start.go:901] validating driver "qemu2" against &{Name:ha-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-023000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:28:30.250141    4039 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:28:30.252358    4039 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:28:30.252379    4039 cni.go:84] Creating CNI manager for ""
	I0723 07:28:30.252383    4039 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0723 07:28:30.252432    4039 start.go:340] cluster config:
	{Name:ha-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-023000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:28:30.255849    4039 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:28:30.267634    4039 out.go:177] * Starting "ha-023000" primary control-plane node in "ha-023000" cluster
	I0723 07:28:30.272090    4039 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:28:30.272105    4039 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:28:30.272115    4039 cache.go:56] Caching tarball of preloaded images
	I0723 07:28:30.272185    4039 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:28:30.272192    4039 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:28:30.272268    4039 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/ha-023000/config.json ...
	I0723 07:28:30.272705    4039 start.go:360] acquireMachinesLock for ha-023000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:28:30.272739    4039 start.go:364] duration metric: took 28.416µs to acquireMachinesLock for "ha-023000"
	I0723 07:28:30.272749    4039 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:28:30.272755    4039 fix.go:54] fixHost starting: 
	I0723 07:28:30.272873    4039 fix.go:112] recreateIfNeeded on ha-023000: state=Stopped err=<nil>
	W0723 07:28:30.272881    4039 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:28:30.277111    4039 out.go:177] * Restarting existing qemu2 VM for "ha-023000" ...
	I0723 07:28:30.285072    4039 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:28:30.285111    4039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:1e:9f:de:dd:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/disk.qcow2
	I0723 07:28:30.287063    4039 main.go:141] libmachine: STDOUT: 
	I0723 07:28:30.287084    4039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:28:30.287113    4039 fix.go:56] duration metric: took 14.356709ms for fixHost
	I0723 07:28:30.287118    4039 start.go:83] releasing machines lock for "ha-023000", held for 14.374875ms
	W0723 07:28:30.287124    4039 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:28:30.287177    4039 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:28:30.287182    4039 start.go:729] Will try again in 5 seconds ...
	I0723 07:28:35.289307    4039 start.go:360] acquireMachinesLock for ha-023000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:28:35.289723    4039 start.go:364] duration metric: took 325.875µs to acquireMachinesLock for "ha-023000"
	I0723 07:28:35.289861    4039 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:28:35.289881    4039 fix.go:54] fixHost starting: 
	I0723 07:28:35.290650    4039 fix.go:112] recreateIfNeeded on ha-023000: state=Stopped err=<nil>
	W0723 07:28:35.290677    4039 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:28:35.295147    4039 out.go:177] * Restarting existing qemu2 VM for "ha-023000" ...
	I0723 07:28:35.298926    4039 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:28:35.299153    4039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:1e:9f:de:dd:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/ha-023000/disk.qcow2
	I0723 07:28:35.307934    4039 main.go:141] libmachine: STDOUT: 
	I0723 07:28:35.308011    4039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:28:35.308079    4039 fix.go:56] duration metric: took 18.199875ms for fixHost
	I0723 07:28:35.308097    4039 start.go:83] releasing machines lock for "ha-023000", held for 18.3535ms
	W0723 07:28:35.308251    4039 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:28:35.315027    4039 out.go:177] 
	W0723 07:28:35.318038    4039 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:28:35.318085    4039 out.go:239] * 
	* 
	W0723 07:28:35.320415    4039 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:28:35.331019    4039 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-023000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000: exit status 7 (68.83625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
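
Because socket_vmnet_client fails before it ever execs qemu-system-aarch64, the failure can be reproduced without minikube at all: the client's job is to connect to the socket, receive a file descriptor, and exec whatever command follows. The echo probe below is an illustration, not part of the test suite, and may need sudo depending on the socket's permissions:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo connected
    # The same "Connection refused" here pins the problem on the daemon, not on qemu.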

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-023000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-023000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-023000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-023000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000: exit status 7 (29.307833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
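
Note on the failure mode: the assertion at ha_test.go:413 unmarshals the output of "minikube profile list --output json" and compares the profile's Status field against "Degraded". Since every node is Stopped, the profile reports "Stopped" and the check fails before any cluster interaction happens. A minimal Go sketch of that comparison, assuming only the field names visible in the captured JSON (the struct and trimmed sample below are illustrative, not the test's actual types):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors just the fields of `profile list --output json`
// that the status check needs, per the output captured above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Trimmed sample of the JSON captured above.
	data := []byte(`{"invalid":[],"valid":[{"Name":"ha-023000","Status":"Stopped"}]}`)
	var pl profileList
	if err := json.Unmarshal(data, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Status != "Degraded" {
			fmt.Printf("profile %q: expected Degraded, got %s\n", p.Name, p.Status)
		}
	}
}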

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-023000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-023000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.349375ms)

-- stdout --
	* The control-plane node ha-023000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-023000"

-- /stdout --
** stderr ** 
	I0723 07:28:35.516532    4054 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:28:35.516664    4054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:28:35.516667    4054 out.go:304] Setting ErrFile to fd 2...
	I0723 07:28:35.516670    4054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:28:35.516789    4054 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:28:35.517024    4054 mustload.go:65] Loading cluster: ha-023000
	I0723 07:28:35.517228    4054 config.go:182] Loaded profile config "ha-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0723 07:28:35.517522    4054 out.go:239] ! The control-plane node ha-023000 host is not running (will try others): state=Stopped
	! The control-plane node ha-023000 host is not running (will try others): state=Stopped
	W0723 07:28:35.517621    4054 out.go:239] ! The control-plane node ha-023000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-023000-m02 host is not running (will try others): state=Stopped
	I0723 07:28:35.521018    4054 out.go:177] * The control-plane node ha-023000-m03 host is not running: state=Stopped
	I0723 07:28:35.524893    4054 out.go:177]   To start a cluster, run: "minikube start -p ha-023000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-023000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-023000 -n ha-023000: exit status 7 (28.972167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.16s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-881000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-881000 --driver=qemu2 : exit status 80 (10.086686375s)

-- stdout --
	* [image-881000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-881000" primary control-plane node in "image-881000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-881000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-881000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-881000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-881000 -n image-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-881000 -n image-881000: exit status 7 (68.262584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.16s)
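
Every start failure in this report reduces to the same root cause: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet, and no socket_vmnet daemon appears to be listening there. A minimal Go sketch (not part of the test suite) that reproduces just that connectivity check:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dial the unix socket that socket_vmnet_client hands to QEMU. With no
	// socket_vmnet daemon listening, Dial fails with "connection refused",
	// matching the GUEST_PROVISION errors throughout this report.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}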

TestJSONOutput/start/Command (9.75s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-320000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-320000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.751222167s)

-- stdout --
	{"specversion":"1.0","id":"d53d2119-a13d-4c9f-8467-847a81db7df6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-320000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"796838ae-0c95-4924-aa83-52e5b4c93f42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19319"}}
	{"specversion":"1.0","id":"4f01009b-44a6-42c7-bb3e-d6d48506aebf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig"}}
	{"specversion":"1.0","id":"71062208-dcd8-4dd8-a84a-b81b23bdc247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"55e32796-57ec-4d27-9f95-0d9bbc81d6cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2d25e07a-feea-4da9-a12c-3d425cbcb72d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube"}}
	{"specversion":"1.0","id":"2fbea67d-e177-4755-a357-2f9744ed0d5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bdb7b3c8-022f-4d82-949a-072c9880b90e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9194c26-3774-4b3e-9db2-9434bbf3a692","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"6623130c-4ad2-482b-a41f-66c93cb34f9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-320000\" primary control-plane node in \"json-output-320000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"de3f03ab-e14e-4795-8920-cbc052822b89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"7ee57b12-28f8-4b8b-b611-44e687be4077","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-320000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b3f2b27-fa05-4cbf-8340-abedff515540","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"7968415e-bc03-4dee-8a3f-b18bbc64f30d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"18bf6bce-82ba-48ad-ac87-5bc3299cb8ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-320000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"6ccaf97d-09e0-4200-ad29-d4a723f99585","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"c0a1a339-60ef-44b0-b4e1-c03c07bdf034","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-320000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.75s)
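
This test fails twice over: the start itself fails, and the raw "OUTPUT:" and "ERROR:" lines that socket_vmnet_client writes to stdout are interleaved with the cloud-event stream, so decoding each output line as JSON stops at the first non-JSON byte ('O'). A hedged sketch of that per-line decode (illustrative, not json_output_test.go's actual parsing code):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// One valid cloud event followed by the raw lines seen in the output above.
	stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// e.g. invalid character 'O' looking for beginning of value
			fmt.Println("not a cloud event:", err)
			continue
		}
		fmt.Println("event type:", ev["type"])
	}
}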

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-320000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-320000 --output=json --user=testUser: exit status 83 (78.069375ms)

-- stdout --
	{"specversion":"1.0","id":"c950f1ae-799f-4d3c-85d6-6879c16993bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-320000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"db9abbf9-7c10-4b45-9de7-2598e64cd243","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-320000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-320000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-320000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-320000 --output=json --user=testUser: exit status 83 (42.106166ms)

-- stdout --
	* The control-plane node json-output-320000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-320000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-320000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-320000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.2s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-112000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-112000 --driver=qemu2 : exit status 80 (9.914160167s)

-- stdout --
	* [first-112000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-112000" primary control-plane node in "first-112000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-112000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-112000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-112000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-23 07:29:09.75171 -0700 PDT m=+2012.393409792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-114000 -n second-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-114000 -n second-114000: exit status 85 (86.305084ms)

-- stdout --
	* Profile "second-114000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-114000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-114000" host is not running, skipping log retrieval (state="* Profile \"second-114000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-114000\"")
helpers_test.go:175: Cleaning up "second-114000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-114000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-23 07:29:09.939984 -0700 PDT m=+2012.581687667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-112000 -n first-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-112000 -n first-112000: exit status 7 (28.941292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-112000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-112000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-112000
--- FAIL: TestMinikubeProfile (10.20s)

TestMountStart/serial/StartWithMountFirst (10.1s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-034000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-034000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.029841542s)

-- stdout --
	* [mount-start-1-034000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-034000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-034000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-034000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-034000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-034000 -n mount-start-1-034000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-034000 -n mount-start-1-034000: exit status 7 (69.453417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-034000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.10s)

TestMultiNode/serial/FreshStart2Nodes (9.89s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-887000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-887000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.817777s)

-- stdout --
	* [multinode-887000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-887000" primary control-plane node in "multinode-887000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-887000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:29:20.350936    4208 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:29:20.351089    4208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:29:20.351093    4208 out.go:304] Setting ErrFile to fd 2...
	I0723 07:29:20.351095    4208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:29:20.351230    4208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:29:20.352281    4208 out.go:298] Setting JSON to false
	I0723 07:29:20.368349    4208 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3524,"bootTime":1721741436,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:29:20.368422    4208 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:29:20.375018    4208 out.go:177] * [multinode-887000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:29:20.382929    4208 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:29:20.383024    4208 notify.go:220] Checking for updates...
	I0723 07:29:20.389947    4208 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:29:20.392898    4208 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:29:20.395968    4208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:29:20.398849    4208 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:29:20.401883    4208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:29:20.405131    4208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:29:20.408866    4208 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:29:20.415966    4208 start.go:297] selected driver: qemu2
	I0723 07:29:20.415972    4208 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:29:20.415979    4208 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:29:20.418230    4208 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:29:20.420934    4208 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:29:20.424006    4208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:29:20.424034    4208 cni.go:84] Creating CNI manager for ""
	I0723 07:29:20.424039    4208 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0723 07:29:20.424042    4208 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0723 07:29:20.424072    4208 start.go:340] cluster config:
	{Name:multinode-887000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-887000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:29:20.427834    4208 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:29:20.435924    4208 out.go:177] * Starting "multinode-887000" primary control-plane node in "multinode-887000" cluster
	I0723 07:29:20.439945    4208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:29:20.439961    4208 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:29:20.439971    4208 cache.go:56] Caching tarball of preloaded images
	I0723 07:29:20.440040    4208 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:29:20.440045    4208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:29:20.440251    4208 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/multinode-887000/config.json ...
	I0723 07:29:20.440263    4208 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/multinode-887000/config.json: {Name:mkdb13ca6912477809eb7624bc203dc98040057f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:29:20.440486    4208 start.go:360] acquireMachinesLock for multinode-887000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:29:20.440523    4208 start.go:364] duration metric: took 30.666µs to acquireMachinesLock for "multinode-887000"
	I0723 07:29:20.440535    4208 start.go:93] Provisioning new machine with config: &{Name:multinode-887000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-887000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:29:20.440564    4208 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:29:20.447902    4208 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:29:20.465542    4208 start.go:159] libmachine.API.Create for "multinode-887000" (driver="qemu2")
	I0723 07:29:20.465571    4208 client.go:168] LocalClient.Create starting
	I0723 07:29:20.465634    4208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:29:20.465664    4208 main.go:141] libmachine: Decoding PEM data...
	I0723 07:29:20.465675    4208 main.go:141] libmachine: Parsing certificate...
	I0723 07:29:20.465714    4208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:29:20.465736    4208 main.go:141] libmachine: Decoding PEM data...
	I0723 07:29:20.465751    4208 main.go:141] libmachine: Parsing certificate...
	I0723 07:29:20.466098    4208 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:29:20.613887    4208 main.go:141] libmachine: Creating SSH key...
	I0723 07:29:20.744514    4208 main.go:141] libmachine: Creating Disk image...
	I0723 07:29:20.744520    4208 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:29:20.744724    4208 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2
	I0723 07:29:20.754278    4208 main.go:141] libmachine: STDOUT: 
	I0723 07:29:20.754299    4208 main.go:141] libmachine: STDERR: 
	I0723 07:29:20.754351    4208 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2 +20000M
	I0723 07:29:20.762144    4208 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:29:20.762166    4208 main.go:141] libmachine: STDERR: 
	I0723 07:29:20.762179    4208 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2
	I0723 07:29:20.762184    4208 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:29:20.762193    4208 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:29:20.762216    4208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:83:f2:41:79:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2
	I0723 07:29:20.763873    4208 main.go:141] libmachine: STDOUT: 
	I0723 07:29:20.763892    4208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:29:20.763910    4208 client.go:171] duration metric: took 298.338667ms to LocalClient.Create
	I0723 07:29:22.766058    4208 start.go:128] duration metric: took 2.325514542s to createHost
	I0723 07:29:22.766119    4208 start.go:83] releasing machines lock for "multinode-887000", held for 2.325628708s
	W0723 07:29:22.766247    4208 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:29:22.782315    4208 out.go:177] * Deleting "multinode-887000" in qemu2 ...
	W0723 07:29:22.808359    4208 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:29:22.808415    4208 start.go:729] Will try again in 5 seconds ...
	I0723 07:29:27.810558    4208 start.go:360] acquireMachinesLock for multinode-887000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:29:27.811002    4208 start.go:364] duration metric: took 360.541µs to acquireMachinesLock for "multinode-887000"
	I0723 07:29:27.811136    4208 start.go:93] Provisioning new machine with config: &{Name:multinode-887000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-887000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:29:27.811406    4208 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:29:27.827730    4208 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:29:27.877818    4208 start.go:159] libmachine.API.Create for "multinode-887000" (driver="qemu2")
	I0723 07:29:27.877884    4208 client.go:168] LocalClient.Create starting
	I0723 07:29:27.878000    4208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:29:27.878063    4208 main.go:141] libmachine: Decoding PEM data...
	I0723 07:29:27.878077    4208 main.go:141] libmachine: Parsing certificate...
	I0723 07:29:27.878135    4208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:29:27.878183    4208 main.go:141] libmachine: Decoding PEM data...
	I0723 07:29:27.878200    4208 main.go:141] libmachine: Parsing certificate...
	I0723 07:29:27.878782    4208 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:29:28.038585    4208 main.go:141] libmachine: Creating SSH key...
	I0723 07:29:28.079273    4208 main.go:141] libmachine: Creating Disk image...
	I0723 07:29:28.079278    4208 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:29:28.079460    4208 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2
	I0723 07:29:28.088592    4208 main.go:141] libmachine: STDOUT: 
	I0723 07:29:28.088611    4208 main.go:141] libmachine: STDERR: 
	I0723 07:29:28.088663    4208 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2 +20000M
	I0723 07:29:28.096705    4208 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:29:28.096719    4208 main.go:141] libmachine: STDERR: 
	I0723 07:29:28.096730    4208 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2
	I0723 07:29:28.096735    4208 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:29:28.096745    4208 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:29:28.096772    4208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:d3:f1:b5:14:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2
	I0723 07:29:28.098452    4208 main.go:141] libmachine: STDOUT: 
	I0723 07:29:28.098465    4208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:29:28.098478    4208 client.go:171] duration metric: took 220.591917ms to LocalClient.Create
	I0723 07:29:30.100611    4208 start.go:128] duration metric: took 2.289212625s to createHost
	I0723 07:29:30.100673    4208 start.go:83] releasing machines lock for "multinode-887000", held for 2.289684708s
	W0723 07:29:30.101055    4208 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-887000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-887000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:29:30.109427    4208 out.go:177] 
	W0723 07:29:30.115547    4208 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:29:30.115591    4208 out.go:239] * 
	* 
	W0723 07:29:30.118328    4208 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:29:30.126489    4208 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-887000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (65.620291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.89s)
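Triage note: the root cause of this failure group is visible in the stderr above: the qemu2 driver could not reach the socket_vmnet socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal check sketch, assuming the default paths recorded in the profile config (SocketVMnetPath=/var/run/socket_vmnet); the daemon path and gateway address follow the socket_vmnet documentation and may differ per install:

	$ ls -l /var/run/socket_vmnet          # does the socket exist?
	$ pgrep -fl socket_vmnet               # is a daemon serving it?
	$ sudo /opt/socket_vmnet/bin/socket_vmnet \
	      --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet    # start it manually if nothing is listening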

TestMultiNode/serial/DeployApp2Nodes (89.41s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (128.882ms)

** stderr ** 
	error: cluster "multinode-887000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- rollout status deployment/busybox: exit status 1 (57.687083ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.754625ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.125875ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.793083ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.509833ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.1035ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.838458ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0723 07:29:46.554382    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.872625ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.627709ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.118417ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.88525ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.959209ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.202667ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.213084ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.434084ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.842625ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (29.020125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (89.41s)
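Triage note: every kubectl call above fails with 'cluster "multinode-887000" does not exist' / 'no server found', i.e. the kubeconfig has no usable server entry because the VM never came up in FreshStart2Nodes. A quick way to confirm that, using standard minikube/kubectl commands (not part of the harness):

	$ out/minikube-darwin-arm64 profile list                    # profile exists but is Stopped
	$ kubectl config get-contexts                               # no multinode-887000 context
	$ kubectl config view -o jsonpath='{.clusters[*].name}'    # no matching cluster entry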

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-887000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.415208ms)

** stderr ** 
	error: no server found for cluster "multinode-887000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (29.604ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-887000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-887000 -v 3 --alsologtostderr: exit status 83 (42.522417ms)

-- stdout --
	* The control-plane node multinode-887000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-887000"

-- /stdout --
** stderr ** 
	I0723 07:30:59.727373    4588 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:30:59.727508    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:30:59.727512    4588 out.go:304] Setting ErrFile to fd 2...
	I0723 07:30:59.727514    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:30:59.727623    4588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:30:59.727849    4588 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:30:59.728045    4588 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:30:59.733126    4588 out.go:177] * The control-plane node multinode-887000 host is not running: state=Stopped
	I0723 07:30:59.737044    4588 out.go:177]   To start a cluster, run: "minikube start -p multinode-887000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-887000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (28.552625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-887000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-887000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.911291ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-887000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-887000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-887000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (28.999666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-887000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-887000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-887000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-887000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (28.878625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
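Triage note: the assertion counts the entries in Config.Nodes inside the profile JSON; since the second node was never created, only one entry is present. A one-liner to inspect the count, assuming jq is installed (it is not part of the test harness):

	$ out/minikube-darwin-arm64 profile list --output json \
	    | jq '.valid[] | select(.Name == "multinode-887000") | .Config.Nodes | length'
	1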

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status --output json --alsologtostderr: exit status 7 (28.989583ms)

-- stdout --
	{"Name":"multinode-887000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0723 07:30:59.930429    4600 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:30:59.930578    4600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:30:59.930581    4600 out.go:304] Setting ErrFile to fd 2...
	I0723 07:30:59.930583    4600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:30:59.930709    4600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:30:59.930833    4600 out.go:298] Setting JSON to true
	I0723 07:30:59.930848    4600 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:30:59.930890    4600 notify.go:220] Checking for updates...
	I0723 07:30:59.931037    4600 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:30:59.931044    4600 status.go:255] checking status of multinode-887000 ...
	I0723 07:30:59.931232    4600 status.go:330] multinode-887000 host status = "Stopped" (err=<nil>)
	I0723 07:30:59.931236    4600 status.go:343] host is not running, skipping remaining checks
	I0723 07:30:59.931238    4600 status.go:257] multinode-887000 status: &{Name:multinode-887000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-887000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (28.289084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
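Triage note: the decode error ("json: cannot unmarshal object into Go value of type []cmd.Status") is a shape mismatch: with a single stopped node, status --output json emits one JSON object (see the stdout above), while the test decodes into a slice. A sketch that normalizes either shape, again assuming jq is available:

	$ out/minikube-darwin-arm64 -p multinode-887000 status --output json \
	    | jq 'if type == "array" then . else [.] end | .[].Host'
	"Stopped"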

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 node stop m03: exit status 85 (46.877375ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-887000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status: exit status 7 (29.190334ms)

-- stdout --
	multinode-887000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status --alsologtostderr: exit status 7 (28.885166ms)

-- stdout --
	multinode-887000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0723 07:31:00.064413    4608 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:00.064554    4608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:00.064557    4608 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:00.064560    4608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:00.064683    4608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:00.064814    4608 out.go:298] Setting JSON to false
	I0723 07:31:00.064823    4608 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:31:00.064874    4608 notify.go:220] Checking for updates...
	I0723 07:31:00.065032    4608 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:00.065039    4608 status.go:255] checking status of multinode-887000 ...
	I0723 07:31:00.065251    4608 status.go:330] multinode-887000 host status = "Stopped" (err=<nil>)
	I0723 07:31:00.065254    4608 status.go:343] host is not running, skipping remaining checks
	I0723 07:31:00.065256    4608 status.go:257] multinode-887000 status: &{Name:multinode-887000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-887000 status --alsologtostderr": multinode-887000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (29.533292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
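Triage note: node m03 was never added, so any per-node operation exits with GUEST_NODE_RETRIEVE (status 85). Listing the nodes the profile actually has (the same command the suite itself runs later, at multinode_test.go:314) shows only the control-plane node:

	$ out/minikube-darwin-arm64 node list -p multinode-887000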

TestMultiNode/serial/StartAfterStop (35.63s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 node start m03 -v=7 --alsologtostderr: exit status 85 (43.93425ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0723 07:31:00.123014    4612 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:00.123252    4612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:00.123256    4612 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:00.123258    4612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:00.123385    4612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:00.123602    4612 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:31:00.123784    4612 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:00.127014    4612 out.go:177] 
	W0723 07:31:00.130001    4612 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0723 07:31:00.130006    4612 out.go:239] * 
	* 
	W0723 07:31:00.131641    4612 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:31:00.135022    4612 out.go:177] 

** /stderr **
multinode_test.go:284: I0723 07:31:00.123014    4612 out.go:291] Setting OutFile to fd 1 ...
I0723 07:31:00.123252    4612 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:31:00.123256    4612 out.go:304] Setting ErrFile to fd 2...
I0723 07:31:00.123258    4612 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:31:00.123385    4612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
I0723 07:31:00.123602    4612 mustload.go:65] Loading cluster: multinode-887000
I0723 07:31:00.123784    4612 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0723 07:31:00.127014    4612 out.go:177] 
W0723 07:31:00.130001    4612 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0723 07:31:00.130006    4612 out.go:239] * 
* 
W0723 07:31:00.131641    4612 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0723 07:31:00.135022    4612 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-887000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr: exit status 7 (29.027875ms)

-- stdout --
	multinode-887000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0723 07:31:00.167411    4614 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:00.167549    4614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:00.167552    4614 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:00.167555    4614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:00.167675    4614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:00.167800    4614 out.go:298] Setting JSON to false
	I0723 07:31:00.167809    4614 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:31:00.167874    4614 notify.go:220] Checking for updates...
	I0723 07:31:00.168031    4614 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:00.168037    4614 status.go:255] checking status of multinode-887000 ...
	I0723 07:31:00.168235    4614 status.go:330] multinode-887000 host status = "Stopped" (err=<nil>)
	I0723 07:31:00.168239    4614 status.go:343] host is not running, skipping remaining checks
	I0723 07:31:00.168241    4614 status.go:257] multinode-887000 status: &{Name:multinode-887000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr: exit status 7 (71.350917ms)

-- stdout --
	multinode-887000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0723 07:31:01.024779    4616 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:01.024971    4616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:01.024975    4616 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:01.024978    4616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:01.025162    4616 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:01.025306    4616 out.go:298] Setting JSON to false
	I0723 07:31:01.025318    4616 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:31:01.025358    4616 notify.go:220] Checking for updates...
	I0723 07:31:01.025578    4616 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:01.025593    4616 status.go:255] checking status of multinode-887000 ...
	I0723 07:31:01.025888    4616 status.go:330] multinode-887000 host status = "Stopped" (err=<nil>)
	I0723 07:31:01.025893    4616 status.go:343] host is not running, skipping remaining checks
	I0723 07:31:01.025896    4616 status.go:257] multinode-887000 status: &{Name:multinode-887000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr: exit status 7 (72.012792ms)

-- stdout --
	multinode-887000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0723 07:31:02.014650    4618 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:02.014818    4618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:02.014823    4618 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:02.014826    4618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:02.015005    4618 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:02.015152    4618 out.go:298] Setting JSON to false
	I0723 07:31:02.015164    4618 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:31:02.015212    4618 notify.go:220] Checking for updates...
	I0723 07:31:02.015411    4618 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:02.015419    4618 status.go:255] checking status of multinode-887000 ...
	I0723 07:31:02.015690    4618 status.go:330] multinode-887000 host status = "Stopped" (err=<nil>)
	I0723 07:31:02.015695    4618 status.go:343] host is not running, skipping remaining checks
	I0723 07:31:02.015698    4618 status.go:257] multinode-887000 status: &{Name:multinode-887000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr: exit status 7 (72.439792ms)

-- stdout --
	multinode-887000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0723 07:31:04.934132    4620 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:04.934366    4620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:04.934371    4620 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:04.934374    4620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:04.934595    4620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:04.934779    4620 out.go:298] Setting JSON to false
	I0723 07:31:04.934796    4620 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:31:04.934835    4620 notify.go:220] Checking for updates...
	I0723 07:31:04.935058    4620 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:04.935067    4620 status.go:255] checking status of multinode-887000 ...
	I0723 07:31:04.935357    4620 status.go:330] multinode-887000 host status = "Stopped" (err=<nil>)
	I0723 07:31:04.935362    4620 status.go:343] host is not running, skipping remaining checks
	I0723 07:31:04.935365    4620 status.go:257] multinode-887000 status: &{Name:multinode-887000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr: exit status 7 (72.085666ms)

-- stdout --
	multinode-887000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0723 07:31:07.018120    4622 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:07.018325    4622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:07.018330    4622 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:07.018334    4622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:07.018554    4622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:07.018723    4622 out.go:298] Setting JSON to false
	I0723 07:31:07.018737    4622 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:31:07.018784    4622 notify.go:220] Checking for updates...
	I0723 07:31:07.019027    4622 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:07.019048    4622 status.go:255] checking status of multinode-887000 ...
	I0723 07:31:07.019350    4622 status.go:330] multinode-887000 host status = "Stopped" (err=<nil>)
	I0723 07:31:07.019355    4622 status.go:343] host is not running, skipping remaining checks
	I0723 07:31:07.019358    4622 status.go:257] multinode-887000 status: &{Name:multinode-887000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr: exit status 7 (74.631584ms)

-- stdout --
	multinode-887000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0723 07:31:12.742001    4632 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:12.742223    4632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:12.742227    4632 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:12.742230    4632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:12.742411    4632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:12.742591    4632 out.go:298] Setting JSON to false
	I0723 07:31:12.742604    4632 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:31:12.742644    4632 notify.go:220] Checking for updates...
	I0723 07:31:12.742888    4632 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:12.742898    4632 status.go:255] checking status of multinode-887000 ...
	I0723 07:31:12.743179    4632 status.go:330] multinode-887000 host status = "Stopped" (err=<nil>)
	I0723 07:31:12.743184    4632 status.go:343] host is not running, skipping remaining checks
	I0723 07:31:12.743187    4632 status.go:257] multinode-887000 status: &{Name:multinode-887000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0723 07:31:21.266189    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr: exit status 7 (71.796167ms)

-- stdout --
	multinode-887000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0723 07:31:22.502837    4634 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:22.503072    4634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:22.503077    4634 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:22.503081    4634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:22.503267    4634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:22.503410    4634 out.go:298] Setting JSON to false
	I0723 07:31:22.503425    4634 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:31:22.503474    4634 notify.go:220] Checking for updates...
	I0723 07:31:22.503662    4634 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:22.503670    4634 status.go:255] checking status of multinode-887000 ...
	I0723 07:31:22.503950    4634 status.go:330] multinode-887000 host status = "Stopped" (err=<nil>)
	I0723 07:31:22.503955    4634 status.go:343] host is not running, skipping remaining checks
	I0723 07:31:22.503958    4634 status.go:257] multinode-887000 status: &{Name:multinode-887000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr: exit status 7 (72.323292ms)

                                                
                                                
-- stdout --
	multinode-887000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 07:31:35.686337    4644 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:35.686541    4644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:35.686545    4644 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:35.686548    4644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:35.686712    4644 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:35.686864    4644 out.go:298] Setting JSON to false
	I0723 07:31:35.686877    4644 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:31:35.686912    4644 notify.go:220] Checking for updates...
	I0723 07:31:35.687123    4644 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:35.687135    4644 status.go:255] checking status of multinode-887000 ...
	I0723 07:31:35.687408    4644 status.go:330] multinode-887000 host status = "Stopped" (err=<nil>)
	I0723 07:31:35.687413    4644 status.go:343] host is not running, skipping remaining checks
	I0723 07:31:35.687416    4644 status.go:257] multinode-887000 status: &{Name:multinode-887000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-887000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (32.852417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (35.63s)
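Aside: every status check in this run exits with code 7, which the harness itself treats as potentially fine for a stopped host ("status error: exit status 7 (may be ok)" at helpers_test.go:239). A minimal Go sketch, outside the suite, of how a caller might distinguish that case; the binary path and profile name are copied from the log above and would vary:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status", "-p", "multinode-887000")
		out, err := cmd.CombinedOutput()
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
			// Exit 7 still prints per-component state on stdout, as seen above.
			fmt.Printf("cluster not fully running:\n%s", out)
			return
		}
		if err != nil {
			fmt.Println("status failed:", err)
			return
		}
		fmt.Printf("cluster running:\n%s", out)
	}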

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-887000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-887000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-887000: (2.979005209s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-887000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-887000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.218784125s)

                                                
                                                
-- stdout --
	* [multinode-887000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-887000" primary control-plane node in "multinode-887000" cluster
	* Restarting existing qemu2 VM for "multinode-887000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-887000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 07:31:38.791246    4668 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:38.791445    4668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:38.791449    4668 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:38.791452    4668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:38.791612    4668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:38.792847    4668 out.go:298] Setting JSON to false
	I0723 07:31:38.811852    4668 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3662,"bootTime":1721741436,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:31:38.811922    4668 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:31:38.816870    4668 out.go:177] * [multinode-887000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:31:38.823730    4668 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:31:38.823757    4668 notify.go:220] Checking for updates...
	I0723 07:31:38.830818    4668 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:31:38.833861    4668 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:31:38.836811    4668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:31:38.839810    4668 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:31:38.842750    4668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:31:38.846089    4668 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:38.846144    4668 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:31:38.849795    4668 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:31:38.856741    4668 start.go:297] selected driver: qemu2
	I0723 07:31:38.856748    4668 start.go:901] validating driver "qemu2" against &{Name:multinode-887000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-887000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:31:38.856793    4668 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:31:38.858930    4668 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:31:38.858973    4668 cni.go:84] Creating CNI manager for ""
	I0723 07:31:38.858978    4668 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0723 07:31:38.859020    4668 start.go:340] cluster config:
	{Name:multinode-887000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-887000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:31:38.862353    4668 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:31:38.869718    4668 out.go:177] * Starting "multinode-887000" primary control-plane node in "multinode-887000" cluster
	I0723 07:31:38.873778    4668 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:31:38.873793    4668 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:31:38.873802    4668 cache.go:56] Caching tarball of preloaded images
	I0723 07:31:38.873864    4668 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:31:38.873870    4668 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:31:38.873919    4668 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/multinode-887000/config.json ...
	I0723 07:31:38.874334    4668 start.go:360] acquireMachinesLock for multinode-887000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:31:38.874369    4668 start.go:364] duration metric: took 29.083µs to acquireMachinesLock for "multinode-887000"
	I0723 07:31:38.874380    4668 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:31:38.874386    4668 fix.go:54] fixHost starting: 
	I0723 07:31:38.874506    4668 fix.go:112] recreateIfNeeded on multinode-887000: state=Stopped err=<nil>
	W0723 07:31:38.874515    4668 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:31:38.882787    4668 out.go:177] * Restarting existing qemu2 VM for "multinode-887000" ...
	I0723 07:31:38.886822    4668 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:31:38.886856    4668 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:d3:f1:b5:14:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2
	I0723 07:31:38.888840    4668 main.go:141] libmachine: STDOUT: 
	I0723 07:31:38.888865    4668 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:31:38.888892    4668 fix.go:56] duration metric: took 14.507208ms for fixHost
	I0723 07:31:38.888897    4668 start.go:83] releasing machines lock for "multinode-887000", held for 14.522917ms
	W0723 07:31:38.888904    4668 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:31:38.888933    4668 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:31:38.888938    4668 start.go:729] Will try again in 5 seconds ...
	I0723 07:31:43.890983    4668 start.go:360] acquireMachinesLock for multinode-887000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:31:43.891336    4668 start.go:364] duration metric: took 283.125µs to acquireMachinesLock for "multinode-887000"
	I0723 07:31:43.891460    4668 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:31:43.891477    4668 fix.go:54] fixHost starting: 
	I0723 07:31:43.892201    4668 fix.go:112] recreateIfNeeded on multinode-887000: state=Stopped err=<nil>
	W0723 07:31:43.892225    4668 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:31:43.900619    4668 out.go:177] * Restarting existing qemu2 VM for "multinode-887000" ...
	I0723 07:31:43.904543    4668 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:31:43.904769    4668 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:d3:f1:b5:14:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2
	I0723 07:31:43.913460    4668 main.go:141] libmachine: STDOUT: 
	I0723 07:31:43.913518    4668 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:31:43.913576    4668 fix.go:56] duration metric: took 22.100875ms for fixHost
	I0723 07:31:43.913591    4668 start.go:83] releasing machines lock for "multinode-887000", held for 22.231209ms
	W0723 07:31:43.913741    4668 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-887000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-887000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:31:43.919471    4668 out.go:177] 
	W0723 07:31:43.923595    4668 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:31:43.923644    4668 out.go:239] * 
	* 
	W0723 07:31:43.926439    4668 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:31:43.935636    4668 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-887000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-887000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (31.254917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.33s)
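Aside: every restart in this run dies at the same point: socket_vmnet_client cannot connect to /var/run/socket_vmnet, so QEMU never receives its network file descriptor (the "-netdev socket,id=net0,fd=3" argument in the command line above). A quick way to confirm the daemon is down is to probe the socket directly; a minimal Go sketch as a diagnostic aid, not minikube code, using the socket path from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the unix socket that socket_vmnet_client hands to QEMU.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Matches the failure in the log: connection refused means
			// nothing is listening on the socket.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}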

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 node delete m03: exit status 83 (39.535834ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-887000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-887000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-887000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status --alsologtostderr: exit status 7 (28.598416ms)

                                                
                                                
-- stdout --
	multinode-887000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 07:31:44.120719    4685 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:44.120869    4685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:44.120872    4685 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:44.120875    4685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:44.120992    4685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:44.121092    4685 out.go:298] Setting JSON to false
	I0723 07:31:44.121102    4685 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:31:44.121160    4685 notify.go:220] Checking for updates...
	I0723 07:31:44.121277    4685 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:44.121286    4685 status.go:255] checking status of multinode-887000 ...
	I0723 07:31:44.121523    4685 status.go:330] multinode-887000 host status = "Stopped" (err=<nil>)
	I0723 07:31:44.121527    4685 status.go:343] host is not running, skipping remaining checks
	I0723 07:31:44.121529    4685 status.go:257] multinode-887000 status: &{Name:multinode-887000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-887000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (29.621042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
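Aside: the post-mortem's `status --format={{.Host}}` is a Go text/template evaluated against a per-node status value; the available fields match the struct dumped at status.go:257 above (Name, Host, Kubelet, APIServer, Kubeconfig, ...). A self-contained sketch of that mechanism; the struct here only mirrors those fields for illustration and is not minikube's actual type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the fields printed at status.go:257 in the log.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		st := Status{Name: "multinode-887000", Host: "Stopped",
			Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		// Equivalent of --format={{.Host}} on the command line.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		tmpl.Execute(os.Stdout, st) // prints: Stopped
	}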

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-887000 stop: (3.617206333s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status: exit status 7 (60.887375ms)

                                                
                                                
-- stdout --
	multinode-887000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-887000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-887000 status --alsologtostderr: exit status 7 (31.907583ms)

                                                
                                                
-- stdout --
	multinode-887000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 07:31:47.860780    4709 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:47.860956    4709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:47.860960    4709 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:47.860962    4709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:47.861105    4709 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:47.861205    4709 out.go:298] Setting JSON to false
	I0723 07:31:47.861216    4709 mustload.go:65] Loading cluster: multinode-887000
	I0723 07:31:47.861280    4709 notify.go:220] Checking for updates...
	I0723 07:31:47.861421    4709 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:47.861433    4709 status.go:255] checking status of multinode-887000 ...
	I0723 07:31:47.861644    4709 status.go:330] multinode-887000 host status = "Stopped" (err=<nil>)
	I0723 07:31:47.861648    4709 status.go:343] host is not running, skipping remaining checks
	I0723 07:31:47.861651    4709 status.go:257] multinode-887000 status: &{Name:multinode-887000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-887000 status --alsologtostderr": multinode-887000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-887000 status --alsologtostderr": multinode-887000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (29.250875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.74s)
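Aside: the assertions at multinode_test.go:364 and :368 fail because the status output lists only the control-plane node; the second node was never created earlier in the serial run, so a two-node stop cannot be verified. A sketch of the kind of count the test appears to perform (this is an assumption about the test's intent, not its literal code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output captured above: only the control plane is listed.
		out := "multinode-887000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
		const wantNodes = 2 // control plane plus one worker
		if got := strings.Count(out, "host: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
		}
	}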

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-887000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-887000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.176596625s)

                                                
                                                
-- stdout --
	* [multinode-887000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-887000" primary control-plane node in "multinode-887000" cluster
	* Restarting existing qemu2 VM for "multinode-887000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-887000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 07:31:47.919594    4713 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:31:47.919743    4713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:47.919746    4713 out.go:304] Setting ErrFile to fd 2...
	I0723 07:31:47.919749    4713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:31:47.919871    4713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:31:47.921077    4713 out.go:298] Setting JSON to false
	I0723 07:31:47.937209    4713 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3671,"bootTime":1721741436,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:31:47.937295    4713 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:31:47.942446    4713 out.go:177] * [multinode-887000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:31:47.948271    4713 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:31:47.948342    4713 notify.go:220] Checking for updates...
	I0723 07:31:47.955362    4713 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:31:47.958297    4713 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:31:47.961332    4713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:31:47.964332    4713 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:31:47.965643    4713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:31:47.968659    4713 config.go:182] Loaded profile config "multinode-887000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:31:47.968916    4713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:31:47.973342    4713 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:31:47.978329    4713 start.go:297] selected driver: qemu2
	I0723 07:31:47.978337    4713 start.go:901] validating driver "qemu2" against &{Name:multinode-887000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-887000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:31:47.978413    4713 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:31:47.980692    4713 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:31:47.980772    4713 cni.go:84] Creating CNI manager for ""
	I0723 07:31:47.980788    4713 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0723 07:31:47.980828    4713 start.go:340] cluster config:
	{Name:multinode-887000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-887000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:31:47.984152    4713 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:31:47.991229    4713 out.go:177] * Starting "multinode-887000" primary control-plane node in "multinode-887000" cluster
	I0723 07:31:47.995310    4713 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:31:47.995323    4713 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:31:47.995330    4713 cache.go:56] Caching tarball of preloaded images
	I0723 07:31:47.995379    4713 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:31:47.995385    4713 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:31:47.995433    4713 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/multinode-887000/config.json ...
	I0723 07:31:47.995830    4713 start.go:360] acquireMachinesLock for multinode-887000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:31:47.995857    4713 start.go:364] duration metric: took 21.458µs to acquireMachinesLock for "multinode-887000"
	I0723 07:31:47.995867    4713 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:31:47.995871    4713 fix.go:54] fixHost starting: 
	I0723 07:31:47.995985    4713 fix.go:112] recreateIfNeeded on multinode-887000: state=Stopped err=<nil>
	W0723 07:31:47.995993    4713 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:31:48.004331    4713 out.go:177] * Restarting existing qemu2 VM for "multinode-887000" ...
	I0723 07:31:48.008355    4713 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:31:48.008389    4713 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:d3:f1:b5:14:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2
	I0723 07:31:48.010421    4713 main.go:141] libmachine: STDOUT: 
	I0723 07:31:48.010442    4713 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:31:48.010472    4713 fix.go:56] duration metric: took 14.599875ms for fixHost
	I0723 07:31:48.010476    4713 start.go:83] releasing machines lock for "multinode-887000", held for 14.615167ms
	W0723 07:31:48.010483    4713 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:31:48.010524    4713 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:31:48.010529    4713 start.go:729] Will try again in 5 seconds ...
	I0723 07:31:53.012604    4713 start.go:360] acquireMachinesLock for multinode-887000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:31:53.013011    4713 start.go:364] duration metric: took 300.709µs to acquireMachinesLock for "multinode-887000"
	I0723 07:31:53.013220    4713 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:31:53.013238    4713 fix.go:54] fixHost starting: 
	I0723 07:31:53.013927    4713 fix.go:112] recreateIfNeeded on multinode-887000: state=Stopped err=<nil>
	W0723 07:31:53.013957    4713 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:31:53.019460    4713 out.go:177] * Restarting existing qemu2 VM for "multinode-887000" ...
	I0723 07:31:53.023401    4713 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:31:53.023641    4713 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:d3:f1:b5:14:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/multinode-887000/disk.qcow2
	I0723 07:31:53.032714    4713 main.go:141] libmachine: STDOUT: 
	I0723 07:31:53.032766    4713 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:31:53.032822    4713 fix.go:56] duration metric: took 19.586333ms for fixHost
	I0723 07:31:53.032839    4713 start.go:83] releasing machines lock for "multinode-887000", held for 19.773375ms
	W0723 07:31:53.033014    4713 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-887000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-887000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:31:53.041366    4713 out.go:177] 
	W0723 07:31:53.045316    4713 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:31:53.045342    4713 out.go:239] * 
	* 
	W0723 07:31:53.047614    4713 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:31:53.055379    4713 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-887000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (67.448167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
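Aside: the trace shows start.go's recovery path: after a StartHost failure it waits five seconds ("Will try again in 5 seconds" at start.go:729) and retries once before exiting with the GUEST_PROVISION error and exit status 80. The shape of that logic, sketched with a stand-in for the real machine-start call:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that fails in this run.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				// In the log, minikube exits 80 (GUEST_PROVISION) at this point.
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}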

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-887000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-887000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-887000-m01 --driver=qemu2 : exit status 80 (9.860700666s)

                                                
                                                
-- stdout --
	* [multinode-887000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-887000-m01" primary control-plane node in "multinode-887000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-887000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-887000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-887000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-887000-m02 --driver=qemu2 : exit status 80 (9.911378291s)

                                                
                                                
-- stdout --
	* [multinode-887000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-887000-m02" primary control-plane node in "multinode-887000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-887000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-887000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-887000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-887000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-887000: exit status 83 (81.18725ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-887000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-887000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-887000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-887000 -n multinode-887000: exit status 7 (29.228209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-887000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.00s)
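Aside: ValidateNameConflict deliberately starts profiles named multinode-887000-m01 and -m02, which look like the machine names minikube gives additional nodes of the multinode-887000 profile, to see how name collisions are handled. A sketch of such a suffix check; the regexp is an illustration of the naming rule, not minikube's implementation:

	package main

	import (
		"fmt"
		"regexp"
	)

	// nodeSuffix matches names ending in -mNN, the multi-node machine pattern.
	var nodeSuffix = regexp.MustCompile(`^(.+)-m(\d+)$`)

	func main() {
		name := "multinode-887000-m01"
		if m := nodeSuffix.FindStringSubmatch(name); m != nil {
			fmt.Printf("%q could be mistaken for node m%s of profile %q\n",
				name, m[2], m[1])
		}
	}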

                                                
                                    
TestPreload (9.92s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-810000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-810000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.775880583s)

                                                
                                                
-- stdout --
	* [test-preload-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-810000" primary control-plane node in "test-preload-810000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 07:32:13.270387    4773 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:32:13.270752    4773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:32:13.270757    4773 out.go:304] Setting ErrFile to fd 2...
	I0723 07:32:13.270760    4773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:32:13.270948    4773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:32:13.272312    4773 out.go:298] Setting JSON to false
	I0723 07:32:13.288585    4773 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3697,"bootTime":1721741436,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:32:13.288653    4773 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:32:13.294847    4773 out.go:177] * [test-preload-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:32:13.302799    4773 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:32:13.302858    4773 notify.go:220] Checking for updates...
	I0723 07:32:13.309725    4773 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:32:13.312770    4773 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:32:13.315870    4773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:32:13.318758    4773 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:32:13.321775    4773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:32:13.325167    4773 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:32:13.325232    4773 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:32:13.329688    4773 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:32:13.336797    4773 start.go:297] selected driver: qemu2
	I0723 07:32:13.336804    4773 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:32:13.336810    4773 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:32:13.339048    4773 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:32:13.341748    4773 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:32:13.344897    4773 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:32:13.344926    4773 cni.go:84] Creating CNI manager for ""
	I0723 07:32:13.344935    4773 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:32:13.344945    4773 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 07:32:13.344971    4773 start.go:340] cluster config:
	{Name:test-preload-810000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:32:13.348501    4773 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:32:13.355731    4773 out.go:177] * Starting "test-preload-810000" primary control-plane node in "test-preload-810000" cluster
	I0723 07:32:13.359785    4773 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0723 07:32:13.359854    4773 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/test-preload-810000/config.json ...
	I0723 07:32:13.359870    4773 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/test-preload-810000/config.json: {Name:mk7dd9a94c5cb26702dae84fcda7264cc56d5360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:32:13.359876    4773 cache.go:107] acquiring lock: {Name:mk65a64c1222dcf5a5836dc48db31002cffd4310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:32:13.359877    4773 cache.go:107] acquiring lock: {Name:mkdd881652b3b5a47463bb5e11f04d3fc7234f4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:32:13.359904    4773 cache.go:107] acquiring lock: {Name:mk8c3097c245fec273a0088ccfaf70ba4574244a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:32:13.360089    4773 cache.go:107] acquiring lock: {Name:mk329aa2317822cb3ddf92fc3c3bc7221c62a5e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:32:13.360103    4773 cache.go:107] acquiring lock: {Name:mk92b5deff41d6600829a88fc02bb3d18cfde1f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:32:13.360125    4773 cache.go:107] acquiring lock: {Name:mk2bd0c7e9ad0b0f664a1d65101afc484ee018b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:32:13.360184    4773 cache.go:107] acquiring lock: {Name:mkaef9e82143035774f9c6c74661da21bd30a171 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:32:13.360208    4773 start.go:360] acquireMachinesLock for test-preload-810000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:32:13.360227    4773 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0723 07:32:13.360232    4773 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0723 07:32:13.360228    4773 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0723 07:32:13.360243    4773 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0723 07:32:13.360268    4773 start.go:364] duration metric: took 43.542µs to acquireMachinesLock for "test-preload-810000"
	I0723 07:32:13.360262    4773 cache.go:107] acquiring lock: {Name:mk2a71a6434610b096e378b4295d658593f5f7cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:32:13.360369    4773 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0723 07:32:13.360413    4773 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0723 07:32:13.360445    4773 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0723 07:32:13.360306    4773 start.go:93] Provisioning new machine with config: &{Name:test-preload-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:32:13.360475    4773 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:32:13.360564    4773 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:32:13.368783    4773 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:32:13.373456    4773 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0723 07:32:13.373602    4773 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0723 07:32:13.374123    4773 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0723 07:32:13.374125    4773 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0723 07:32:13.376510    4773 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0723 07:32:13.376575    4773 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0723 07:32:13.376654    4773 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0723 07:32:13.376675    4773 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:32:13.387214    4773 start.go:159] libmachine.API.Create for "test-preload-810000" (driver="qemu2")
	I0723 07:32:13.387239    4773 client.go:168] LocalClient.Create starting
	I0723 07:32:13.387315    4773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:32:13.387345    4773 main.go:141] libmachine: Decoding PEM data...
	I0723 07:32:13.387355    4773 main.go:141] libmachine: Parsing certificate...
	I0723 07:32:13.387396    4773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:32:13.387420    4773 main.go:141] libmachine: Decoding PEM data...
	I0723 07:32:13.387427    4773 main.go:141] libmachine: Parsing certificate...
	I0723 07:32:13.387788    4773 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:32:13.537741    4773 main.go:141] libmachine: Creating SSH key...
	I0723 07:32:13.607012    4773 main.go:141] libmachine: Creating Disk image...
	I0723 07:32:13.607119    4773 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:32:13.607320    4773 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/disk.qcow2
	I0723 07:32:13.617177    4773 main.go:141] libmachine: STDOUT: 
	I0723 07:32:13.617195    4773 main.go:141] libmachine: STDERR: 
	I0723 07:32:13.617244    4773 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/disk.qcow2 +20000M
	I0723 07:32:13.626040    4773 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:32:13.626057    4773 main.go:141] libmachine: STDERR: 
	I0723 07:32:13.626092    4773 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/disk.qcow2
	I0723 07:32:13.626098    4773 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:32:13.626113    4773 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:32:13.626141    4773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:9a:f0:c4:bb:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/disk.qcow2
	I0723 07:32:13.627980    4773 main.go:141] libmachine: STDOUT: 
	I0723 07:32:13.628009    4773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:32:13.628029    4773 client.go:171] duration metric: took 240.790625ms to LocalClient.Create
	I0723 07:32:13.814234    4773 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0723 07:32:13.835653    4773 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0723 07:32:13.837567    4773 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0723 07:32:13.859259    4773 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0723 07:32:13.859285    4773 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0723 07:32:13.879266    4773 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0723 07:32:13.897156    4773 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0723 07:32:13.918508    4773 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0723 07:32:13.943891    4773 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0723 07:32:13.943914    4773 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 583.782917ms
	I0723 07:32:13.943934    4773 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0723 07:32:14.301132    4773 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0723 07:32:14.301219    4773 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0723 07:32:14.578348    4773 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0723 07:32:14.578394    4773 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.2185375s
	I0723 07:32:14.578422    4773 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0723 07:32:15.628288    4773 start.go:128] duration metric: took 2.267818083s to createHost
	I0723 07:32:15.628364    4773 start.go:83] releasing machines lock for "test-preload-810000", held for 2.268123042s
	W0723 07:32:15.628415    4773 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:32:15.639526    4773 out.go:177] * Deleting "test-preload-810000" in qemu2 ...
	W0723 07:32:15.667065    4773 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:32:15.667093    4773 start.go:729] Will try again in 5 seconds ...
	I0723 07:32:16.046724    4773 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0723 07:32:16.046765    4773 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.686741875s
	I0723 07:32:16.046794    4773 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0723 07:32:16.356962    4773 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0723 07:32:16.357174    4773 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.996955542s
	I0723 07:32:16.357213    4773 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0723 07:32:17.216994    4773 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0723 07:32:17.217039    4773 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 3.857233542s
	I0723 07:32:17.217064    4773 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0723 07:32:17.475063    4773 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0723 07:32:17.475105    4773 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.115105584s
	I0723 07:32:17.475129    4773 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0723 07:32:18.255579    4773 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0723 07:32:18.255640    4773 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.895824291s
	I0723 07:32:18.255666    4773 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0723 07:32:20.667665    4773 start.go:360] acquireMachinesLock for test-preload-810000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:32:20.668102    4773 start.go:364] duration metric: took 356.375µs to acquireMachinesLock for "test-preload-810000"
	I0723 07:32:20.668213    4773 start.go:93] Provisioning new machine with config: &{Name:test-preload-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:32:20.668472    4773 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:32:20.673970    4773 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:32:20.725851    4773 start.go:159] libmachine.API.Create for "test-preload-810000" (driver="qemu2")
	I0723 07:32:20.726017    4773 client.go:168] LocalClient.Create starting
	I0723 07:32:20.726124    4773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:32:20.726183    4773 main.go:141] libmachine: Decoding PEM data...
	I0723 07:32:20.726198    4773 main.go:141] libmachine: Parsing certificate...
	I0723 07:32:20.726255    4773 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:32:20.726330    4773 main.go:141] libmachine: Decoding PEM data...
	I0723 07:32:20.726343    4773 main.go:141] libmachine: Parsing certificate...
	I0723 07:32:20.726885    4773 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:32:20.883653    4773 main.go:141] libmachine: Creating SSH key...
	I0723 07:32:20.948332    4773 main.go:141] libmachine: Creating Disk image...
	I0723 07:32:20.948338    4773 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:32:20.948523    4773 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/disk.qcow2
	I0723 07:32:20.957900    4773 main.go:141] libmachine: STDOUT: 
	I0723 07:32:20.958091    4773 main.go:141] libmachine: STDERR: 
	I0723 07:32:20.958139    4773 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/disk.qcow2 +20000M
	I0723 07:32:20.966292    4773 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:32:20.966393    4773 main.go:141] libmachine: STDERR: 
	I0723 07:32:20.966404    4773 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/disk.qcow2
	I0723 07:32:20.966409    4773 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:32:20.966421    4773 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:32:20.966453    4773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:8c:14:bc:9c:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/test-preload-810000/disk.qcow2
	I0723 07:32:20.968198    4773 main.go:141] libmachine: STDOUT: 
	I0723 07:32:20.968298    4773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:32:20.968310    4773 client.go:171] duration metric: took 242.291917ms to LocalClient.Create
	I0723 07:32:22.384625    4773 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0723 07:32:22.384684    4773 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.024742833s
	I0723 07:32:22.384708    4773 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0723 07:32:22.384765    4773 cache.go:87] Successfully saved all images to host disk.
	I0723 07:32:22.970533    4773 start.go:128] duration metric: took 2.302068291s to createHost
	I0723 07:32:22.970608    4773 start.go:83] releasing machines lock for "test-preload-810000", held for 2.302524958s
	W0723 07:32:22.970911    4773 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:32:22.983533    4773 out.go:177] 
	W0723 07:32:22.987624    4773 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:32:22.987655    4773 out.go:239] * 
	* 
	W0723 07:32:22.990402    4773 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:32:23.003450    4773 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-810000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-23 07:32:23.021512 -0700 PDT m=+2205.666706042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-810000 -n test-preload-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-810000 -n test-preload-810000: exit status 7 (66.044958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-810000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-810000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-810000
--- FAIL: TestPreload (9.92s)
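Note that while host creation fails, the image-cache goroutines in the same run all succeed: every "save to tar file ... succeeded" line above corresponds to a tarball under $MINIKUBE_HOME/cache/images/arm64. A short Go sketch, assuming the default ~/.minikube layout (this report itself uses a Jenkins-specific MINIKUBE_HOME), that checks those files landed on disk:

	// check_image_cache.go - stat the per-image cache files named in the
	// cache.go:80 lines above. Paths mirror this log; adjust root for a
	// non-default MINIKUBE_HOME.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		root := filepath.Join(os.Getenv("HOME"), ".minikube", "cache", "images", "arm64", "registry.k8s.io")
		for _, img := range []string{
			"pause_3.7",
			"etcd_3.5.3-0",
			"kube-apiserver_v1.24.4",
			"kube-scheduler_v1.24.4",
			"kube-controller-manager_v1.24.4",
			"kube-proxy_v1.24.4",
		} {
			if fi, err := os.Stat(filepath.Join(root, img)); err == nil {
				fmt.Printf("%-32s %d bytes\n", img, fi.Size())
			} else {
				fmt.Printf("%-32s missing: %v\n", img, err)
			}
		}
	}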

                                                
                                    
TestScheduledStopUnix (9.94s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-509000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-509000 --memory=2048 --driver=qemu2 : exit status 80 (9.795858542s)

                                                
                                                
-- stdout --
	* [scheduled-stop-509000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-509000" primary control-plane node in "scheduled-stop-509000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-509000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-509000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-509000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-509000" primary control-plane node in "scheduled-stop-509000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-509000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-509000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-23 07:32:32.960944 -0700 PDT m=+2215.606318334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-509000 -n scheduled-stop-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-509000 -n scheduled-stop-509000: exit status 7 (67.003083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-509000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-509000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-509000
--- FAIL: TestScheduledStopUnix (9.94s)
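The stdout blocks above show minikube's single-retry start flow: the first create fails, the profile is deleted, and after a fixed pause a second create is attempted before the hard exit. A simplified Go illustration of that control flow, under the assumption (visible in the TestPreload stderr above as "Will try again in 5 seconds") that the delay is fixed; this is a sketch of the observed behavior, not minikube's actual implementation:

	// retry_sketch.go - the start flow visible in these logs: one failed
	// create, a retry after a fixed delay, then exit status 80.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for libmachine's create step; on this CI host it
	// always fails with the socket_vmnet connection refusal.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds"
			if err = createHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				os.Exit(80) // the exit status every failing start asserts on
			}
		}
		fmt.Println("host created")
	}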

                                                
                                    
TestSkaffold (12.58s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2206101855 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2206101855 version: (1.057404792s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-848000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-848000 --memory=2600 --driver=qemu2 : exit status 80 (10.16519425s)

                                                
                                                
-- stdout --
	* [skaffold-848000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-848000" primary control-plane node in "skaffold-848000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-848000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-848000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-848000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-848000" primary control-plane node in "skaffold-848000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-848000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-848000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-23 07:32:45.540739 -0700 PDT m=+2228.186340084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-848000 -n skaffold-848000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-848000 -n skaffold-848000: exit status 7 (61.251334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-848000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-848000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-848000
--- FAIL: TestSkaffold (12.58s)
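Each "(dbg) Non-zero exit" line above comes from the harness running the minikube binary and inspecting its exit code (80 for failed starts, 7 from status against a stopped host, 83 for the wrong-host warnings). A self-contained Go sketch of that pattern using only the standard library; the binary path matches this report, the rest is illustrative:

	// run_and_check.go - invoke a binary and recover its exit status, the
	// way the (dbg) lines in this report are produced.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "skaffold-848000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s\n", out)
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("non-zero exit: %d\n", exitErr.ExitCode()) // e.g. 7 when Stopped
		}
	}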

                                                
                                    
TestRunningBinaryUpgrade (628.47s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.4196918896 start -p running-upgrade-350000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.4196918896 start -p running-upgrade-350000 --memory=2200 --vm-driver=qemu2 : (58.228926625s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-350000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0723 07:34:46.550155    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:36:21.260769    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:37:49.614482    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:39:46.541475    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:41:21.246573    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-350000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m56.089244458s)

                                                
                                                
-- stdout --
	* [running-upgrade-350000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-350000" primary control-plane node in "running-upgrade-350000" cluster
	* Updating the running qemu2 "running-upgrade-350000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 07:34:06.906248    5099 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:34:06.906385    5099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:34:06.906389    5099 out.go:304] Setting ErrFile to fd 2...
	I0723 07:34:06.906391    5099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:34:06.906521    5099 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:34:06.907648    5099 out.go:298] Setting JSON to false
	I0723 07:34:06.924783    5099 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3810,"bootTime":1721741436,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:34:06.924887    5099 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:34:06.929652    5099 out.go:177] * [running-upgrade-350000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:34:06.936688    5099 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:34:06.936766    5099 notify.go:220] Checking for updates...
	I0723 07:34:06.944645    5099 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:34:06.947612    5099 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:34:06.950632    5099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:34:06.953664    5099 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:34:06.955001    5099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:34:06.957857    5099 config.go:182] Loaded profile config "running-upgrade-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0723 07:34:06.960575    5099 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0723 07:34:06.963679    5099 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:34:06.967613    5099 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:34:06.974607    5099 start.go:297] selected driver: qemu2
	I0723 07:34:06.974612    5099 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-350000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50345 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-350000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0723 07:34:06.974671    5099 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:34:06.976882    5099 cni.go:84] Creating CNI manager for ""
	I0723 07:34:06.976898    5099 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:34:06.976928    5099 start.go:340] cluster config:
	{Name:running-upgrade-350000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50345 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-350000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0723 07:34:06.976976    5099 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:34:06.983643    5099 out.go:177] * Starting "running-upgrade-350000" primary control-plane node in "running-upgrade-350000" cluster
	I0723 07:34:06.987608    5099 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0723 07:34:06.987620    5099 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0723 07:34:06.987625    5099 cache.go:56] Caching tarball of preloaded images
	I0723 07:34:06.987670    5099 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:34:06.987675    5099 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0723 07:34:06.987717    5099 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/config.json ...
	I0723 07:34:06.988029    5099 start.go:360] acquireMachinesLock for running-upgrade-350000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:34:22.415138    5099 start.go:364] duration metric: took 15.427368042s to acquireMachinesLock for "running-upgrade-350000"
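
The fifteen-second gap between requesting and acquiring the machines lock above is time spent waiting for another profile to release it; the lock is polled with the 500ms delay and 13-minute timeout recorded in the acquireMachinesLock line. Below is a minimal sketch of that poll-with-deadline pattern, using a hypothetical lock file rather than minikube's actual lock implementation:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // tryAcquire attempts to create a lock file exclusively; failure with
    // "already exists" means another process holds the lock. This is a
    // hypothetical stand-in for minikube's real machines lock.
    func tryAcquire(path string) (bool, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
        if err != nil {
            if os.IsExist(err) {
                return false, nil
            }
            return false, err
        }
        return true, f.Close()
    }

    // acquireWithRetry mirrors the Delay/Timeout parameters shown in the log.
    func acquireWithRetry(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            ok, err := tryAcquire(path)
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for machines lock")
            }
            time.Sleep(delay) // 500ms in the log above
        }
    }

    func main() {
        start := time.Now()
        if err := acquireWithRetry("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("took %s to acquire lock\n", time.Since(start))
    }
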
	I0723 07:34:22.415168    5099 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:34:22.415176    5099 fix.go:54] fixHost starting: 
	I0723 07:34:22.416096    5099 fix.go:112] recreateIfNeeded on running-upgrade-350000: state=Running err=<nil>
	W0723 07:34:22.416108    5099 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:34:22.420229    5099 out.go:177] * Updating the running qemu2 "running-upgrade-350000" VM ...
	I0723 07:34:22.424235    5099 machine.go:94] provisionDockerMachine start ...
	I0723 07:34:22.424289    5099 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:22.424416    5099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b6ea10] 0x104b71270 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0723 07:34:22.424420    5099 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 07:34:22.483835    5099 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-350000
	
	I0723 07:34:22.483855    5099 buildroot.go:166] provisioning hostname "running-upgrade-350000"
	I0723 07:34:22.483901    5099 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:22.484032    5099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b6ea10] 0x104b71270 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0723 07:34:22.484042    5099 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-350000 && echo "running-upgrade-350000" | sudo tee /etc/hostname
	I0723 07:34:22.563984    5099 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-350000
	
	I0723 07:34:22.564042    5099 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:22.564165    5099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b6ea10] 0x104b71270 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0723 07:34:22.564174    5099 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-350000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-350000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-350000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 07:34:22.622435    5099 main.go:141] libmachine: SSH cmd err, output: <nil>: 
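
The empty output above is the success path of the /etc/hosts command: the guest either already had a matching entry or the 127.0.1.1 line was rewritten silently. A sketch of how such a command string can be assembled in Go follows (an illustrative helper, not the exact provisioner code):

    package main

    import "fmt"

    // hostsUpdateCmd builds the shell that ensures /etc/hosts maps 127.0.1.1
    // to the machine's hostname, mirroring the command in the log above.
    func hostsUpdateCmd(hostname string) string {
        return fmt.Sprintf(`
        if ! grep -xq '.*\s%s' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
            else
                echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
            fi
        fi`, hostname, hostname, hostname)
    }

    func main() {
        fmt.Println(hostsUpdateCmd("running-upgrade-350000"))
    }
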
	I0723 07:34:22.622447    5099 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19319-1567/.minikube CaCertPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19319-1567/.minikube}
	I0723 07:34:22.622466    5099 buildroot.go:174] setting up certificates
	I0723 07:34:22.622471    5099 provision.go:84] configureAuth start
	I0723 07:34:22.622478    5099 provision.go:143] copyHostCerts
	I0723 07:34:22.622546    5099 exec_runner.go:144] found /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.pem, removing ...
	I0723 07:34:22.622553    5099 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.pem
	I0723 07:34:22.622644    5099 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.pem (1078 bytes)
	I0723 07:34:22.622822    5099 exec_runner.go:144] found /Users/jenkins/minikube-integration/19319-1567/.minikube/cert.pem, removing ...
	I0723 07:34:22.622827    5099 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19319-1567/.minikube/cert.pem
	I0723 07:34:22.622869    5099 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19319-1567/.minikube/cert.pem (1123 bytes)
	I0723 07:34:22.622968    5099 exec_runner.go:144] found /Users/jenkins/minikube-integration/19319-1567/.minikube/key.pem, removing ...
	I0723 07:34:22.622973    5099 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19319-1567/.minikube/key.pem
	I0723 07:34:22.623008    5099 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19319-1567/.minikube/key.pem (1679 bytes)
	I0723 07:34:22.623098    5099 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-350000 san=[127.0.0.1 localhost minikube running-upgrade-350000]
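
configureAuth above issues a server certificate signed by the cached minikube CA, carrying the DNS and IP SANs listed in the log line. A minimal, self-contained sketch of the same issuance with crypto/x509 (a throwaway CA is self-signed here purely for illustration):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA, standing in for the cached ca.pem/ca-key.pem pair.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-350000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            // SANs from the log: [127.0.0.1 localhost minikube running-upgrade-350000]
            DNSNames:    []string{"localhost", "minikube", "running-upgrade-350000"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Printf("issued server cert, %d bytes DER\n", len(srvDER))
    }
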
	I0723 07:34:22.710377    5099 provision.go:177] copyRemoteCerts
	I0723 07:34:22.710424    5099 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 07:34:22.710435    5099 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/running-upgrade-350000/id_rsa Username:docker}
	I0723 07:34:22.742258    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 07:34:22.750020    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0723 07:34:22.757258    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 07:34:22.764926    5099 provision.go:87] duration metric: took 142.452209ms to configureAuth
	I0723 07:34:22.764937    5099 buildroot.go:189] setting minikube options for container-runtime
	I0723 07:34:22.765062    5099 config.go:182] Loaded profile config "running-upgrade-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0723 07:34:22.765096    5099 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:22.765189    5099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b6ea10] 0x104b71270 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0723 07:34:22.765193    5099 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0723 07:34:22.823255    5099 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0723 07:34:22.823266    5099 buildroot.go:70] root file system type: tmpfs
	I0723 07:34:22.823322    5099 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0723 07:34:22.823374    5099 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:22.823487    5099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b6ea10] 0x104b71270 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0723 07:34:22.823521    5099 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0723 07:34:22.884823    5099 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0723 07:34:22.884922    5099 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:22.885239    5099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b6ea10] 0x104b71270 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0723 07:34:22.885260    5099 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0723 07:34:22.945359    5099 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 07:34:22.945371    5099 machine.go:97] duration metric: took 521.139917ms to provisionDockerMachine
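
Note the guard in the command above: the rendered unit is written to docker.service.new and only moved into place (followed by daemon-reload, enable, and a docker restart) when it differs from the current unit, so an unchanged configuration avoids restarting Docker. A sketch of running that compare-then-swap over SSH with golang.org/x/crypto/ssh (host, port, and the password credential are placeholders; the test actually authenticates with the id_rsa key shown in the sshutil lines):

    package main

    import (
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    // The exact command from the log: replace and restart only on a diff.
    const swapCmd = `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.Password("tcuser")}, // placeholder credential
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),              // acceptable for a local test VM sketch
        }
        client, err := ssh.Dial("tcp", "localhost:50274", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, _ := sess.CombinedOutput(swapCmd)
        fmt.Printf("%s", out)
    }
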
	I0723 07:34:22.945377    5099 start.go:293] postStartSetup for "running-upgrade-350000" (driver="qemu2")
	I0723 07:34:22.945383    5099 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 07:34:22.945439    5099 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 07:34:22.945450    5099 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/running-upgrade-350000/id_rsa Username:docker}
	I0723 07:34:22.975733    5099 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 07:34:22.976964    5099 info.go:137] Remote host: Buildroot 2021.02.12
	I0723 07:34:22.976971    5099 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19319-1567/.minikube/addons for local assets ...
	I0723 07:34:22.977047    5099 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19319-1567/.minikube/files for local assets ...
	I0723 07:34:22.977137    5099 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/ssl/certs/20652.pem -> 20652.pem in /etc/ssl/certs
	I0723 07:34:22.977228    5099 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 07:34:22.980179    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/ssl/certs/20652.pem --> /etc/ssl/certs/20652.pem (1708 bytes)
	I0723 07:34:22.986897    5099 start.go:296] duration metric: took 41.516292ms for postStartSetup
	I0723 07:34:22.986912    5099 fix.go:56] duration metric: took 571.748417ms for fixHost
	I0723 07:34:22.986952    5099 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:22.987054    5099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b6ea10] 0x104b71270 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0723 07:34:22.987058    5099 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0723 07:34:23.043998    5099 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721745262.821373584
	
	I0723 07:34:23.044006    5099 fix.go:216] guest clock: 1721745262.821373584
	I0723 07:34:23.044010    5099 fix.go:229] Guest: 2024-07-23 07:34:22.821373584 -0700 PDT Remote: 2024-07-23 07:34:22.986914 -0700 PDT m=+16.100872001 (delta=-165.540416ms)
	I0723 07:34:23.044019    5099 fix.go:200] guest clock delta is within tolerance: -165.540416ms
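
The guest clock check parses the output of `date +%s.%N` from the VM and compares it against the host clock; here the -165ms delta is within tolerance, so no resync is needed. A sketch of that parse-and-compare (the one-second tolerance below is an assumption; the log does not state the threshold):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // guestTime parses `date +%s.%N` output; this sketch assumes the full
    // nine-digit %N fraction, as in the log value.
    func guestTime(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec := int64(0)
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := guestTime("1721745262.821373584") // value from the log
        if err != nil {
            panic(err)
        }
        delta := guest.Sub(time.Now())
        const tolerance = time.Second // assumed threshold, not from the log
        fmt.Printf("delta=%v within=%v\n", delta, math.Abs(delta.Seconds()) < tolerance.Seconds())
    }
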
	I0723 07:34:23.044022    5099 start.go:83] releasing machines lock for "running-upgrade-350000", held for 628.878708ms
	I0723 07:34:23.044077    5099 ssh_runner.go:195] Run: cat /version.json
	I0723 07:34:23.044086    5099 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/running-upgrade-350000/id_rsa Username:docker}
	I0723 07:34:23.044077    5099 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 07:34:23.044122    5099 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/running-upgrade-350000/id_rsa Username:docker}
	W0723 07:34:23.044613    5099 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50274: connect: connection refused
	I0723 07:34:23.044637    5099 retry.go:31] will retry after 191.227356ms: dial tcp [::1]:50274: connect: connection refused
	W0723 07:34:23.072659    5099 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0723 07:34:23.072708    5099 ssh_runner.go:195] Run: systemctl --version
	I0723 07:34:23.074420    5099 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 07:34:23.075954    5099 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 07:34:23.075979    5099 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0723 07:34:23.078694    5099 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0723 07:34:23.083254    5099 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 07:34:23.083261    5099 start.go:495] detecting cgroup driver to use...
	I0723 07:34:23.083328    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 07:34:23.088238    5099 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0723 07:34:23.091385    5099 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0723 07:34:23.094370    5099 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0723 07:34:23.094397    5099 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0723 07:34:23.097552    5099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0723 07:34:23.100914    5099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0723 07:34:23.104311    5099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0723 07:34:23.107271    5099 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 07:34:23.110090    5099 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0723 07:34:23.113043    5099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0723 07:34:23.116306    5099 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0723 07:34:23.119658    5099 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 07:34:23.122245    5099 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 07:34:23.124952    5099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:34:23.226463    5099 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0723 07:34:23.233428    5099 start.go:495] detecting cgroup driver to use...
	I0723 07:34:23.233511    5099 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0723 07:34:23.249135    5099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 07:34:23.254752    5099 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 07:34:23.265720    5099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 07:34:23.270987    5099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0723 07:34:23.311382    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 07:34:23.316816    5099 ssh_runner.go:195] Run: which cri-dockerd
	I0723 07:34:23.318156    5099 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0723 07:34:23.321440    5099 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0723 07:34:23.326344    5099 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0723 07:34:23.428084    5099 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0723 07:34:23.538115    5099 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0723 07:34:23.538169    5099 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0723 07:34:23.543415    5099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:34:23.651473    5099 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0723 07:34:40.148970    5099 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.497778166s)
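
The 130-byte /etc/docker/daemon.json copied in at 07:34:23.538 pins Docker to the cgroupfs driver so it matches the containerd and kubelet settings configured in the surrounding steps. The log does not show the file's contents; below is a sketch of a plausible payload, assuming Docker's documented exec-opts key:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // daemonConfig models a minimal daemon.json; these exact keys are an
    // assumption based on Docker's documented settings, not taken from the log.
    type daemonConfig struct {
        ExecOpts  []string `json:"exec-opts"`
        LogDriver string   `json:"log-driver,omitempty"`
    }

    func main() {
        b, _ := json.MarshalIndent(daemonConfig{
            ExecOpts:  []string{"native.cgroupdriver=cgroupfs"},
            LogDriver: "json-file",
        }, "", "  ")
        fmt.Println(string(b))
    }
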
	I0723 07:34:40.149050    5099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0723 07:34:40.153938    5099 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0723 07:34:40.162293    5099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0723 07:34:40.167214    5099 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0723 07:34:40.253082    5099 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0723 07:34:40.337120    5099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:34:40.419577    5099 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0723 07:34:40.425121    5099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0723 07:34:40.429537    5099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:34:40.515645    5099 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0723 07:34:40.555036    5099 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0723 07:34:40.555120    5099 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0723 07:34:40.558366    5099 start.go:563] Will wait 60s for crictl version
	I0723 07:34:40.558423    5099 ssh_runner.go:195] Run: which crictl
	I0723 07:34:40.559891    5099 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 07:34:40.572519    5099 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0723 07:34:40.572588    5099 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0723 07:34:40.586919    5099 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0723 07:34:40.603013    5099 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0723 07:34:40.603086    5099 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0723 07:34:40.604377    5099 kubeadm.go:883] updating cluster {Name:running-upgrade-350000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50345 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-350000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0723 07:34:40.604421    5099 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0723 07:34:40.604463    5099 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0723 07:34:40.615258    5099 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0723 07:34:40.615277    5099 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0723 07:34:40.615323    5099 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0723 07:34:40.618639    5099 ssh_runner.go:195] Run: which lz4
	I0723 07:34:40.619890    5099 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0723 07:34:40.621163    5099 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 07:34:40.621174    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0723 07:34:41.509890    5099 docker.go:649] duration metric: took 890.043167ms to copy over tarball
	I0723 07:34:41.509935    5099 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 07:34:42.871332    5099 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.361405792s)
	I0723 07:34:42.871346    5099 ssh_runner.go:146] rm: /preloaded.tar.lz4
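
The failed stat at 07:34:40.621 is the expected cache-miss path: the preload tarball is absent in the guest, so the ~360MB archive (359514331 bytes) is copied in, unpacked with lz4 into /var, and removed. A sketch of that check-then-copy pattern (the runner closure is a local stub standing in for the SSH session):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureRemoteFile mimics the stat-then-copy pattern in the log: a failed
    // `stat` (exit status 1) is treated as "not present", triggering the copy.
    func ensureRemoteFile(run func(args ...string) error, src, dst string) error {
        if err := run("stat", "-c", "%s %y", dst); err == nil {
            return nil // already present, skip the large transfer
        }
        return run("cp", src, dst) // local stand-in for scp over the SSH session
    }

    func main() {
        run := func(args ...string) error { return exec.Command(args[0], args[1:]...).Run() }
        err := ensureRemoteFile(run, "/tmp/src.tar.lz4", "/tmp/preloaded.tar.lz4")
        fmt.Println("ensure:", err)
    }
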
	I0723 07:34:42.887458    5099 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0723 07:34:42.890888    5099 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0723 07:34:42.896507    5099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:34:42.977299    5099 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0723 07:34:44.307022    5099 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.329730875s)
	I0723 07:34:44.307116    5099 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0723 07:34:44.327746    5099 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0723 07:34:44.327755    5099 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0723 07:34:44.327759    5099 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 07:34:44.335700    5099 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:34:44.336822    5099 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0723 07:34:44.337972    5099 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0723 07:34:44.338009    5099 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:34:44.339943    5099 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0723 07:34:44.339952    5099 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0723 07:34:44.341364    5099 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0723 07:34:44.341482    5099 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0723 07:34:44.344341    5099 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0723 07:34:44.344361    5099 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0723 07:34:44.347570    5099 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0723 07:34:44.347577    5099 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0723 07:34:44.349275    5099 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0723 07:34:44.349317    5099 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0723 07:34:44.351000    5099 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0723 07:34:44.351819    5099 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0723 07:34:44.750133    5099 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0723 07:34:44.761779    5099 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0723 07:34:44.761811    5099 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0723 07:34:44.761869    5099 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0723 07:34:44.772722    5099 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0723 07:34:44.779418    5099 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0723 07:34:44.780496    5099 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0723 07:34:44.785223    5099 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0723 07:34:44.790249    5099 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0723 07:34:44.790271    5099 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0723 07:34:44.790327    5099 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0723 07:34:44.798186    5099 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0723 07:34:44.798208    5099 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0723 07:34:44.798265    5099 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0723 07:34:44.798441    5099 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0723 07:34:44.798694    5099 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0723 07:34:44.798704    5099 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0723 07:34:44.798726    5099 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0723 07:34:44.827061    5099 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0723 07:34:44.827093    5099 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0723 07:34:44.827114    5099 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0723 07:34:44.827061    5099 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0723 07:34:44.827159    5099 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0723 07:34:44.827173    5099 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0723 07:34:44.827204    5099 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0723 07:34:44.827254    5099 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	W0723 07:34:44.838130    5099 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0723 07:34:44.838262    5099 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0723 07:34:44.838334    5099 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0723 07:34:44.838353    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0723 07:34:44.838383    5099 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0723 07:34:44.838398    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0723 07:34:44.838411    5099 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0723 07:34:44.847586    5099 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0723 07:34:44.847606    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0723 07:34:44.854343    5099 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0723 07:34:44.865611    5099 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0723 07:34:44.865640    5099 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0723 07:34:44.865720    5099 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	W0723 07:34:44.951163    5099 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0723 07:34:44.951268    5099 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:34:44.963830    5099 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0723 07:34:44.963891    5099 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0723 07:34:44.963906    5099 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0723 07:34:44.963911    5099 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0723 07:34:44.963958    5099 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0723 07:34:44.964005    5099 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0723 07:34:44.980349    5099 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0723 07:34:44.980375    5099 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:34:44.980438    5099 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:34:44.989997    5099 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0723 07:34:44.990095    5099 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0723 07:34:44.990114    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0723 07:34:45.092728    5099 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0723 07:34:45.092744    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0723 07:34:45.196970    5099 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0723 07:34:45.196990    5099 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0723 07:34:45.196998    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0723 07:34:45.409962    5099 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0723 07:34:45.410003    5099 cache_images.go:92] duration metric: took 1.082252375s to LoadCachedImages
	W0723 07:34:45.410047    5099 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
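
Each image found in the local cache (pause, coredns, etcd) is streamed into the daemon with a `cat file | docker load` pipe, as the Run lines above show; kube-apiserver_v1.24.1 is absent from the cache directory, so LoadCachedImages fails and the start must obtain the remaining images another way. A local sketch of the load pipe:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadImage streams an image tarball into the docker daemon, the same
    // `cat file | docker load` pipe the log runs inside the VM (run locally
    // here for illustration).
    func loadImage(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker load: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(loadImage("/var/lib/minikube/images/pause_3.7"))
    }
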
	I0723 07:34:45.410054    5099 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0723 07:34:45.410105    5099 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-350000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-350000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 07:34:45.410190    5099 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0723 07:34:45.424226    5099 cni.go:84] Creating CNI manager for ""
	I0723 07:34:45.424238    5099 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:34:45.424253    5099 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 07:34:45.424262    5099 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-350000 NodeName:running-upgrade-350000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 07:34:45.424333    5099 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-350000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
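The rendered kubeadm config above pins podSubnet to 10.244.0.0/16 (matching the bridge CNI chosen earlier), cgroupDriver to cgroupfs (matching the Docker and containerd settings), and relaxes eviction thresholds for the small test VM. A sketch of rendering a trimmed fragment of such a config with text/template (the template text is illustrative, not minikube's):

    package main

    import (
        "os"
        "text/template"
    )

    // A trimmed, illustrative subset of the ClusterConfiguration above.
    const frag = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(frag))
        t.Execute(os.Stdout, map[string]string{
            "KubernetesVersion": "v1.24.1",
            "PodSubnet":         "10.244.0.0/16",
            "ServiceCIDR":       "10.96.0.0/12",
        })
    }
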
	I0723 07:34:45.424387    5099 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0723 07:34:45.427644    5099 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 07:34:45.427677    5099 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 07:34:45.430513    5099 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0723 07:34:45.435809    5099 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 07:34:45.440971    5099 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0723 07:34:45.446694    5099 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0723 07:34:45.448315    5099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:34:45.534047    5099 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 07:34:45.539499    5099 certs.go:68] Setting up /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000 for IP: 10.0.2.15
	I0723 07:34:45.539507    5099 certs.go:194] generating shared ca certs ...
	I0723 07:34:45.539515    5099 certs.go:226] acquiring lock for ca certs: {Name:mk3c99e95d37931a4e7b239d14c48fdfa53d0dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:34:45.539669    5099 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.key
	I0723 07:34:45.539703    5099 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/proxy-client-ca.key
	I0723 07:34:45.539709    5099 certs.go:256] generating profile certs ...
	I0723 07:34:45.539784    5099 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/client.key
	I0723 07:34:45.539806    5099 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/apiserver.key.55eeabe8
	I0723 07:34:45.539818    5099 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/apiserver.crt.55eeabe8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0723 07:34:45.587731    5099 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/apiserver.crt.55eeabe8 ...
	I0723 07:34:45.587741    5099 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/apiserver.crt.55eeabe8: {Name:mkc064fe25c1435c06df28bb91778367cbdc026b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:34:45.588042    5099 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/apiserver.key.55eeabe8 ...
	I0723 07:34:45.588047    5099 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/apiserver.key.55eeabe8: {Name:mk13f507c8fb39e5f8e9a355da156a15588a0c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:34:45.588181    5099 certs.go:381] copying /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/apiserver.crt.55eeabe8 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/apiserver.crt
	I0723 07:34:45.588329    5099 certs.go:385] copying /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/apiserver.key.55eeabe8 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/apiserver.key
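
The SAN list for the apiserver certificate generated above includes 10.96.0.1, the first address of the 10.96.0.0/12 service CIDR (the conventional in-cluster ClusterIP of the kubernetes.default service), alongside the node IP 10.0.2.15. A sketch deriving that address from the CIDR:

    package main

    import (
        "fmt"
        "net"
    )

    // firstServiceIP returns the conventional apiserver ClusterIP: the network
    // address of the service CIDR plus one (10.96.0.0/12 -> 10.96.0.1).
    func firstServiceIP(cidr string) (net.IP, error) {
        _, n, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip := n.IP.To4()
        if ip == nil {
            return nil, fmt.Errorf("IPv4 only in this sketch: %s", cidr)
        }
        out := make(net.IP, len(ip))
        copy(out, ip)
        out[3]++ // safe for this CIDR; a general version would carry across octets
        return out, nil
    }

    func main() {
        ip, err := firstServiceIP("10.96.0.0/12")
        fmt.Println(ip, err) // 10.96.0.1 <nil>
    }
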
	I0723 07:34:45.588472    5099 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/proxy-client.key
	I0723 07:34:45.588595    5099 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/2065.pem (1338 bytes)
	W0723 07:34:45.588618    5099 certs.go:480] ignoring /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/2065_empty.pem, impossibly tiny 0 bytes
	I0723 07:34:45.588625    5099 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 07:34:45.588644    5099 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem (1078 bytes)
	I0723 07:34:45.588664    5099 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem (1123 bytes)
	I0723 07:34:45.588688    5099 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/key.pem (1679 bytes)
	I0723 07:34:45.588727    5099 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/ssl/certs/20652.pem (1708 bytes)
	I0723 07:34:45.589151    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 07:34:45.597347    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0723 07:34:45.605016    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 07:34:45.612630    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0723 07:34:45.620717    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0723 07:34:45.628443    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 07:34:45.635661    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 07:34:45.643083    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 07:34:45.649835    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/2065.pem --> /usr/share/ca-certificates/2065.pem (1338 bytes)
	I0723 07:34:45.657261    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/ssl/certs/20652.pem --> /usr/share/ca-certificates/20652.pem (1708 bytes)
	I0723 07:34:45.664474    5099 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 07:34:45.671424    5099 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 07:34:45.676834    5099 ssh_runner.go:195] Run: openssl version
	I0723 07:34:45.678948    5099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 07:34:45.682081    5099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 07:34:45.683724    5099 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I0723 07:34:45.683747    5099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 07:34:45.685570    5099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 07:34:45.688727    5099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2065.pem && ln -fs /usr/share/ca-certificates/2065.pem /etc/ssl/certs/2065.pem"
	I0723 07:34:45.691821    5099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2065.pem
	I0723 07:34:45.693263    5099 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:03 /usr/share/ca-certificates/2065.pem
	I0723 07:34:45.693283    5099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2065.pem
	I0723 07:34:45.695375    5099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2065.pem /etc/ssl/certs/51391683.0"
	I0723 07:34:45.697985    5099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20652.pem && ln -fs /usr/share/ca-certificates/20652.pem /etc/ssl/certs/20652.pem"
	I0723 07:34:45.701459    5099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20652.pem
	I0723 07:34:45.703260    5099 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:03 /usr/share/ca-certificates/20652.pem
	I0723 07:34:45.703278    5099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20652.pem
	I0723 07:34:45.705037    5099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20652.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 07:34:45.708117    5099 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 07:34:45.709661    5099 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 07:34:45.711927    5099 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 07:34:45.714175    5099 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 07:34:45.716444    5099 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 07:34:45.718471    5099 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 07:34:45.720463    5099 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0723 07:34:45.722366    5099 kubeadm.go:392] StartCluster: {Name:running-upgrade-350000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50345 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-350000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0723 07:34:45.722443    5099 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0723 07:34:45.733359    5099 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 07:34:45.737819    5099 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 07:34:45.737825    5099 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 07:34:45.737849    5099 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 07:34:45.741512    5099 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 07:34:45.741787    5099 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-350000" does not appear in /Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:34:45.741880    5099 kubeconfig.go:62] /Users/jenkins/minikube-integration/19319-1567/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-350000" cluster setting kubeconfig missing "running-upgrade-350000" context setting]
	I0723 07:34:45.742091    5099 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/kubeconfig: {Name:mkd61b3eb94b798a54b8f29057406aee7268d37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:34:45.742497    5099 kapi.go:59] client config for running-upgrade-350000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/client.key", CAFile:"/Users/jenkins/minikube-integration/19319-1567/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f03fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0723 07:34:45.742813    5099 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 07:34:45.745567    5099 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-350000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0723 07:34:45.745572    5099 kubeadm.go:1160] stopping kube-system containers ...
	I0723 07:34:45.745616    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0723 07:34:45.760608    5099 docker.go:483] Stopping containers: [b03203302c1a 738892dde3c7 16888ddea83e 77c600612208 c214bc94f819 808a5a087b58 df9bb9ec4705 9be331844d55 28b08651520e 801f6529a899 7f4d7b5be87e c84c78d695a2 4f633e2a0faf 8e1ac841589d 160673dd833b 07423546a90a a4e2549e5881 4bbf923dfbce eedef76f65ef 582538b0952c 36e637819fb8 8f5d2c0dcc89]
	I0723 07:34:45.760681    5099 ssh_runner.go:195] Run: docker stop b03203302c1a 738892dde3c7 16888ddea83e 77c600612208 c214bc94f819 808a5a087b58 df9bb9ec4705 9be331844d55 28b08651520e 801f6529a899 7f4d7b5be87e c84c78d695a2 4f633e2a0faf 8e1ac841589d 160673dd833b 07423546a90a a4e2549e5881 4bbf923dfbce eedef76f65ef 582538b0952c 36e637819fb8 8f5d2c0dcc89
	I0723 07:34:45.771983    5099 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 07:34:45.864154    5099 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 07:34:45.868638    5099 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 23 14:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 23 14:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 23 14:34 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul 23 14:33 /etc/kubernetes/scheduler.conf
	
	I0723 07:34:45.868673    5099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/admin.conf
	I0723 07:34:45.872238    5099 kubeadm.go:163] "https://control-plane.minikube.internal:50345" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0723 07:34:45.872265    5099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 07:34:45.875563    5099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/kubelet.conf
	I0723 07:34:45.878379    5099 kubeadm.go:163] "https://control-plane.minikube.internal:50345" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0723 07:34:45.878412    5099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 07:34:45.881406    5099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/controller-manager.conf
	I0723 07:34:45.884370    5099 kubeadm.go:163] "https://control-plane.minikube.internal:50345" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0723 07:34:45.884395    5099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 07:34:45.887103    5099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/scheduler.conf
	I0723 07:34:45.890580    5099 kubeadm.go:163] "https://control-plane.minikube.internal:50345" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0723 07:34:45.890611    5099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 07:34:45.893632    5099 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 07:34:45.897226    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:34:45.930516    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:34:46.595529    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:34:46.801925    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:34:46.829663    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:34:46.852452    5099 api_server.go:52] waiting for apiserver process to appear ...
	I0723 07:34:46.852535    5099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:47.354744    5099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:47.854879    5099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:48.354889    5099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:48.854934    5099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:49.354848    5099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:49.854301    5099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:50.353145    5099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:50.854751    5099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:51.354637    5099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:51.854489    5099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:51.859076    5099 api_server.go:72] duration metric: took 5.006714917s to wait for apiserver process to appear ...
	I0723 07:34:51.859090    5099 api_server.go:88] waiting for apiserver healthz status ...
	I0723 07:34:51.859099    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:34:56.861496    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:34:56.861559    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:01.862030    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:01.862110    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:06.863488    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:06.863530    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:11.864736    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:11.864789    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:16.865926    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:16.865964    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:21.867415    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:21.867443    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:26.869263    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:26.869302    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:31.871482    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:31.871501    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:36.873588    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:36.873662    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:41.875915    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:41.876005    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:46.878281    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:46.878318    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:51.880472    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:51.880683    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:35:51.896976    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:35:51.897065    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:35:51.910917    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:35:51.910991    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:35:51.922142    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:35:51.922249    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:35:51.932729    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:35:51.932799    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:35:51.943436    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:35:51.943511    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:35:51.953954    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:35:51.954024    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:35:51.964226    5099 logs.go:276] 0 containers: []
	W0723 07:35:51.964236    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:35:51.964287    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:35:51.974804    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:35:51.974821    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:35:51.974827    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:35:52.015809    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:35:52.015821    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:35:52.028349    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:35:52.028360    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:35:52.043183    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:35:52.043194    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:35:52.068956    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:35:52.068964    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:35:52.080560    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:35:52.080570    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:35:52.085377    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:35:52.085387    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:35:52.099592    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:35:52.099601    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:35:52.120574    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:35:52.120587    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:35:52.132599    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:35:52.132609    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:35:52.148060    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:35:52.148070    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:35:52.189871    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:35:52.189882    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:35:52.204780    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:35:52.204795    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:35:52.216840    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:35:52.216850    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:35:52.228538    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:35:52.228551    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:35:52.302630    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:35:52.302643    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:35:52.317614    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:35:52.317628    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:35:52.328973    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:35:52.328988    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:35:52.346222    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:35:52.346232    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:35:54.858585    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:59.859168    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:59.859340    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:35:59.872736    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:35:59.872814    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:35:59.884546    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:35:59.884618    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:35:59.894838    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:35:59.894924    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:35:59.905109    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:35:59.905202    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:35:59.915797    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:35:59.915863    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:35:59.926630    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:35:59.926708    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:35:59.941137    5099 logs.go:276] 0 containers: []
	W0723 07:35:59.941149    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:35:59.941216    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:35:59.951590    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:35:59.951605    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:35:59.951610    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:35:59.977679    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:35:59.977689    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:00.014310    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:36:00.014322    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:36:00.028825    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:36:00.028835    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:36:00.040242    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:36:00.040251    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:36:00.052421    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:36:00.052433    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:36:00.064490    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:36:00.064506    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:36:00.101401    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:36:00.101413    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:36:00.112693    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:36:00.112704    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:36:00.124185    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:36:00.124198    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:36:00.141351    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:36:00.141361    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:00.154127    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:00.154139    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:00.158397    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:36:00.158404    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:36:00.169218    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:36:00.169231    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:36:00.181266    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:36:00.181276    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:36:00.199703    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:36:00.199713    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:36:00.211419    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:00.211430    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:00.252451    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:36:00.252462    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:36:00.266979    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:36:00.266992    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:36:02.782816    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:07.785070    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:07.785237    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:07.800470    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:36:07.800560    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:07.812445    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:36:07.812519    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:07.823462    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:36:07.823542    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:07.834153    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:36:07.834220    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:07.844882    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:36:07.844953    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:07.855651    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:36:07.855724    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:07.865937    5099 logs.go:276] 0 containers: []
	W0723 07:36:07.865948    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:07.866010    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:07.876710    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:36:07.876725    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:36:07.876730    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:36:07.896354    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:36:07.896367    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:36:07.917355    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:36:07.917367    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:36:07.929084    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:36:07.929096    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:36:07.946856    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:36:07.946869    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:36:07.960501    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:36:07.960509    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:36:07.972254    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:36:07.972266    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:36:07.989894    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:36:07.989907    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:36:08.001410    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:36:08.001421    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:08.014316    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:36:08.014330    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:36:08.028450    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:36:08.028461    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:36:08.039922    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:36:08.039934    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:36:08.054698    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:08.054707    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:08.093313    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:08.093321    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:08.098007    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:08.098014    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:08.134909    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:36:08.134918    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:36:08.150259    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:36:08.150271    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:36:08.191967    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:36:08.191979    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:36:08.202839    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:08.202851    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:10.732306    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:15.734546    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:15.734775    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:15.753284    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:36:15.753373    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:15.766879    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:36:15.766956    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:15.778724    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:36:15.778798    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:15.789547    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:36:15.789621    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:15.800378    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:36:15.800448    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:15.812573    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:36:15.812643    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:15.823417    5099 logs.go:276] 0 containers: []
	W0723 07:36:15.823430    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:15.823493    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:15.833786    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:36:15.833803    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:36:15.833808    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:36:15.845238    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:36:15.845250    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:36:15.863795    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:36:15.863806    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:15.876718    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:15.876728    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:15.916356    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:15.916366    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:15.920870    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:36:15.920876    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:36:15.932424    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:15.932435    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:15.960496    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:36:15.960507    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:36:15.974357    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:36:15.974367    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:36:15.989148    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:15.989160    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:16.024882    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:36:16.024896    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:36:16.036509    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:36:16.036521    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:36:16.051603    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:36:16.051616    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:36:16.064914    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:36:16.064925    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:36:16.077394    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:36:16.077408    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:36:16.089525    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:36:16.089536    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:36:16.101478    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:36:16.101496    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:36:16.116304    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:36:16.116314    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:36:16.153953    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:36:16.153969    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:36:18.671012    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:23.673466    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:23.673578    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:23.684844    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:36:23.684910    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:23.695702    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:36:23.695775    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:23.706242    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:36:23.706309    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:23.716990    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:36:23.717070    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:23.731814    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:36:23.731882    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:23.742380    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:36:23.742447    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:23.753339    5099 logs.go:276] 0 containers: []
	W0723 07:36:23.753352    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:23.753414    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:23.764193    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:36:23.764210    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:36:23.764215    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:36:23.778142    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:36:23.778156    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:36:23.789813    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:36:23.789824    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:36:23.801936    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:23.801947    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:23.839309    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:36:23.839317    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:36:23.878918    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:36:23.878930    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:36:23.896161    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:36:23.896171    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:36:23.910559    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:36:23.910571    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:36:23.924888    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:36:23.924900    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:23.936981    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:36:23.936992    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:36:23.951051    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:36:23.951063    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:36:23.965347    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:36:23.965358    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:36:23.976470    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:36:23.976483    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:36:23.989336    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:36:23.989347    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:36:24.001712    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:24.001722    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:24.027500    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:24.027508    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:24.031642    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:24.031650    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:24.067655    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:36:24.067665    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:36:24.079472    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:36:24.079482    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:36:26.592334    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:31.594697    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:31.594949    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:31.622492    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:36:31.622612    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:31.639237    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:36:31.639313    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:31.652458    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:36:31.652536    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:31.663784    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:36:31.663850    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:31.675239    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:36:31.675306    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:31.686009    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:36:31.686080    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:31.696314    5099 logs.go:276] 0 containers: []
	W0723 07:36:31.696325    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:31.696383    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:31.706888    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:36:31.706904    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:31.706911    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:31.711296    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:31.711305    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:31.735884    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:31.735891    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:31.774764    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:31.774771    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:31.811070    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:36:31.811080    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:36:31.825665    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:36:31.825678    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:36:31.837558    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:36:31.837569    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:36:31.852350    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:36:31.852361    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:36:31.863151    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:36:31.863163    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:36:31.884917    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:36:31.884929    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:36:31.899112    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:36:31.899122    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:36:31.915653    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:36:31.915665    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:36:31.926741    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:36:31.926752    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:36:31.938384    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:36:31.938400    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:36:31.950697    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:36:31.950706    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:36:31.962673    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:36:31.962682    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:36:31.999221    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:36:31.999232    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:36:32.010192    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:36:32.010204    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:36:32.021376    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:36:32.021388    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:34.534935    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:39.537293    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:39.537558    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:39.564560    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:36:39.564684    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:39.593236    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:36:39.593315    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:39.604456    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:36:39.604534    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:39.614827    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:36:39.614895    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:39.625766    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:36:39.625840    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:39.636641    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:36:39.636718    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:39.647390    5099 logs.go:276] 0 containers: []
	W0723 07:36:39.647403    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:39.647460    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:39.658126    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
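	[editor's note] Each failed probe is followed by the container-enumeration block above: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call per control-plane component, with two IDs per component indicating a restarted container. A minimal local sketch of that discovery step follows; `containerIDs` is a hypothetical helper, and minikube actually runs these commands over SSH via ssh_runner rather than locally.

```go
// Illustrative sketch: enumerate control-plane containers by name filter,
// as in the `docker ps -a --filter=name=... --format={{.ID}}` runs above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs is a hypothetical helper for this sketch.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; two entries for a component means it was restarted.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```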
	I0723 07:36:39.658145    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:36:39.658150    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:36:39.676642    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:39.676653    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:39.681598    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:36:39.681607    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:36:39.718636    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:36:39.718649    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:36:39.730764    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:36:39.730774    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:36:39.743197    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:36:39.743209    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:36:39.755429    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:39.755439    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:39.793912    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:36:39.793924    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:36:39.811283    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:36:39.811296    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:36:39.826196    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:36:39.826206    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:36:39.838166    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:36:39.838176    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:39.850504    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:36:39.850517    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:36:39.867552    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:36:39.867566    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:36:39.878726    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:36:39.878739    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:36:39.890415    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:39.890430    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:39.916396    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:39.916406    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:39.953249    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:36:39.953257    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:36:39.967206    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:36:39.967217    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:36:39.980799    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:36:39.980812    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
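	[editor's note] The gathering block above then tails the last 400 lines of each discovered container (`docker logs --tail 400 <id>`), plus dmesg, journalctl, and `kubectl describe nodes`. A minimal sketch of the per-container step, under the assumption of local Docker access; the IDs are copied from this log and would differ on another host.

```go
// Illustrative sketch: tail the last 400 log lines of each container,
// as in the `docker logs --tail 400 <id>` runs above.
package main

import (
	"fmt"
	"os/exec"
)

func gatherLogs(id string) (string, error) {
	// CombinedOutput, since container logs may go to stderr as well.
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	// IDs taken from the log above (kube-apiserver's two containers).
	for _, id := range []string{"aee96da52a13", "df9bb9ec4705"} {
		logs, err := gatherLogs(id)
		if err != nil {
			fmt.Println(id, "error:", err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", id, logs)
	}
}
```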
	I0723 07:36:42.493669    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:47.496168    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:47.496343    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:47.508702    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:36:47.508774    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:47.520310    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:36:47.520383    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:47.530570    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:36:47.530642    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:47.541133    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:36:47.541200    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:47.551397    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:36:47.551462    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:47.561958    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:36:47.562030    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:47.572090    5099 logs.go:276] 0 containers: []
	W0723 07:36:47.572106    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:47.572160    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:47.583123    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:36:47.583138    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:36:47.583143    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:36:47.623212    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:36:47.623222    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:36:47.639200    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:36:47.639213    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:36:47.651338    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:36:47.651348    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:36:47.662918    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:36:47.662931    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:36:47.680929    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:36:47.680940    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:36:47.693339    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:47.693350    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:47.731539    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:36:47.731557    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:36:47.756052    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:36:47.756063    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:36:47.767442    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:47.767452    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:47.793102    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:36:47.793112    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:36:47.805863    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:36:47.805875    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:36:47.825674    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:36:47.825684    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:36:47.840180    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:36:47.840189    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:36:47.851692    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:47.851704    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:47.856031    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:47.856038    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:47.892089    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:36:47.892101    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:36:47.906491    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:36:47.906502    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:36:47.920028    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:36:47.920038    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:50.434433    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:55.437116    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:55.437260    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:55.451329    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:36:55.451402    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:55.462858    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:36:55.462921    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:55.473583    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:36:55.473666    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:55.488043    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:36:55.488119    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:55.498951    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:36:55.499028    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:55.509813    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:36:55.509883    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:55.520284    5099 logs.go:276] 0 containers: []
	W0723 07:36:55.520296    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:55.520357    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:55.532426    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:36:55.532442    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:36:55.532449    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:36:55.544709    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:36:55.544719    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:36:55.559097    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:36:55.559106    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:36:55.570519    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:36:55.570533    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:36:55.582338    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:36:55.582347    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:36:55.594520    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:36:55.594530    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:36:55.611763    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:55.611772    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:55.647588    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:36:55.647596    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:36:55.661625    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:36:55.661634    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:36:55.706124    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:36:55.706139    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:36:55.723907    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:55.723921    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:55.728767    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:36:55.728777    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:36:55.743995    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:36:55.744005    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:36:55.755505    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:36:55.755520    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:55.768930    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:55.768946    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:55.809556    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:36:55.809565    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:36:55.821296    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:36:55.821308    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:36:55.833346    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:36:55.833357    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:36:55.844526    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:55.844537    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:58.371259    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:03.373597    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:03.373891    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:03.404180    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:37:03.404323    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:03.426226    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:37:03.426314    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:03.439734    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:37:03.439808    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:03.451121    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:37:03.451203    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:03.461511    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:37:03.461581    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:03.472296    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:37:03.472363    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:03.482366    5099 logs.go:276] 0 containers: []
	W0723 07:37:03.482379    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:03.482441    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:03.497020    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:37:03.497036    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:37:03.497041    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:37:03.512737    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:37:03.512747    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:37:03.524643    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:37:03.524657    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:37:03.536985    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:37:03.536995    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:37:03.551041    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:37:03.551051    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:37:03.565339    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:03.565349    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:03.600997    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:37:03.601006    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:37:03.613177    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:37:03.613188    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:37:03.630324    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:37:03.630333    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:37:03.641715    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:37:03.641730    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:37:03.653650    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:37:03.653660    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:37:03.669925    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:37:03.669940    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:37:03.706832    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:37:03.706843    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:37:03.725931    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:37:03.725947    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:37:03.745579    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:03.745593    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:03.769797    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:37:03.769806    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:03.781743    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:03.781752    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:03.786028    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:37:03.786034    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:37:03.797903    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:03.797916    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:06.337515    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:11.339781    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:11.340171    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:11.367806    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:37:11.367947    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:11.385919    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:37:11.386005    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:11.399385    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:37:11.399464    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:11.411085    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:37:11.411153    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:11.421431    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:37:11.421503    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:11.432323    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:37:11.432390    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:11.441952    5099 logs.go:276] 0 containers: []
	W0723 07:37:11.441963    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:11.442015    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:11.452298    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:37:11.452313    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:37:11.452319    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:11.464812    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:37:11.464823    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:37:11.478658    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:37:11.478668    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:37:11.517056    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:37:11.517067    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:37:11.528997    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:37:11.529008    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:37:11.547360    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:11.547369    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:11.552110    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:37:11.552116    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:37:11.568465    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:37:11.568475    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:37:11.580764    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:37:11.580796    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:37:11.598170    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:37:11.598180    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:37:11.611261    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:37:11.611276    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:37:11.626798    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:37:11.626811    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:37:11.639853    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:11.639864    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:11.667646    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:11.667660    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:11.707506    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:11.707514    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:11.746797    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:37:11.746809    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:37:11.760645    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:37:11.760657    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:37:11.776870    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:37:11.776883    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:37:11.793189    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:37:11.793202    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:37:14.305804    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:19.308016    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:19.308330    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:19.336674    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:37:19.336794    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:19.354635    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:37:19.354730    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:19.373470    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:37:19.373546    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:19.384382    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:37:19.384482    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:19.395067    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:37:19.395134    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:19.405674    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:37:19.405751    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:19.415400    5099 logs.go:276] 0 containers: []
	W0723 07:37:19.415410    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:19.415468    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:19.425893    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:37:19.425906    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:37:19.425911    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:37:19.463788    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:37:19.463802    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:37:19.477066    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:37:19.477076    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:37:19.489443    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:37:19.489455    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:37:19.505523    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:37:19.505533    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:37:19.516829    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:19.516840    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:19.551466    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:37:19.551481    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:37:19.566814    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:37:19.566828    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:37:19.578019    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:37:19.578028    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:19.590188    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:19.590202    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:19.627956    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:37:19.627964    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:37:19.641616    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:37:19.641631    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:37:19.654162    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:37:19.654173    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:37:19.667732    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:37:19.667745    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:37:19.679604    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:19.679618    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:19.706054    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:19.706061    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:19.710211    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:37:19.710217    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:37:19.721673    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:37:19.721684    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:37:19.746119    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:37:19.746133    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:37:22.262568    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:27.264714    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:27.264980    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:27.291537    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:37:27.291662    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:27.308841    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:37:27.308915    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:27.322115    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:37:27.322194    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:27.333635    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:37:27.333706    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:27.345386    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:37:27.345459    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:27.356026    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:37:27.356106    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:27.366644    5099 logs.go:276] 0 containers: []
	W0723 07:37:27.366655    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:27.366723    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:27.377310    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:37:27.377324    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:37:27.377329    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:37:27.413348    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:37:27.413359    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:37:27.424723    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:37:27.424736    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:37:27.439450    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:37:27.439460    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:37:27.452959    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:27.452970    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:27.477307    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:37:27.477313    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:37:27.492201    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:37:27.492212    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:37:27.509354    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:37:27.509364    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:37:27.521548    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:37:27.521558    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:27.533495    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:27.533510    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:27.579256    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:37:27.579269    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:37:27.594422    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:37:27.594435    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:37:27.605746    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:37:27.605758    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:37:27.617946    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:37:27.617955    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:37:27.629877    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:27.629886    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:27.667331    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:27.667341    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:27.671983    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:37:27.671990    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:37:27.686450    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:37:27.686461    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:37:27.705276    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:37:27.705287    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:37:30.218619    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:35.220767    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:35.221042    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:35.240284    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:37:35.240393    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:35.254483    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:37:35.254562    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:35.266661    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:37:35.266734    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:35.277403    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:37:35.277472    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:35.289129    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:37:35.289200    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:35.299565    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:37:35.299636    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:35.310158    5099 logs.go:276] 0 containers: []
	W0723 07:37:35.310169    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:35.310227    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:35.320376    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:37:35.320391    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:37:35.320396    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:37:35.334351    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:37:35.334360    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:37:35.352640    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:37:35.352652    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:37:35.364320    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:35.364330    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:35.402528    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:37:35.402547    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:37:35.440530    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:37:35.440541    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:37:35.454582    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:37:35.454595    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:37:35.481595    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:37:35.481605    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:37:35.498960    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:37:35.498971    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:37:35.517829    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:37:35.517841    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:37:35.536740    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:35.536749    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:35.561152    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:37:35.561159    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:37:35.574792    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:37:35.574802    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:37:35.590260    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:37:35.590271    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:37:35.603869    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:37:35.603880    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:37:35.616096    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:37:35.616111    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:37:35.628724    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:37:35.628736    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:35.640938    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:35.640948    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:35.645170    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:35.645177    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:38.183449    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:43.185586    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:43.185814    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:43.208272    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:37:43.208372    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:43.222433    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:37:43.222508    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:43.234593    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:37:43.234667    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:43.245275    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:37:43.245342    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:43.256676    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:37:43.256742    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:43.267693    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:37:43.267768    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:43.278558    5099 logs.go:276] 0 containers: []
	W0723 07:37:43.278570    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:43.278638    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:43.289804    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:37:43.289819    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:37:43.289825    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:37:43.304376    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:37:43.304386    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:37:43.317072    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:37:43.317086    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:37:43.334705    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:37:43.334716    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:37:43.348564    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:37:43.348572    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:37:43.359595    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:37:43.359607    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:37:43.371209    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:37:43.371220    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:37:43.383510    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:37:43.383542    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:43.395514    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:43.395524    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:43.432703    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:37:43.432713    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:37:43.469194    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:37:43.469205    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:37:43.480213    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:37:43.480225    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:37:43.494931    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:37:43.494942    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:37:43.506844    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:43.506854    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:43.511509    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:43.511516    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:43.546264    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:37:43.546278    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:37:43.560285    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:37:43.560298    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:37:43.572074    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:37:43.572087    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:37:43.583111    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:43.583123    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:46.108825    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:51.110960    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:51.111063    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:51.122637    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:37:51.122711    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:51.133755    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:37:51.133825    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:51.144596    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:37:51.144669    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:51.155813    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:37:51.155878    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:51.171813    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:37:51.171878    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:51.187481    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:37:51.187552    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:51.198127    5099 logs.go:276] 0 containers: []
	W0723 07:37:51.198139    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:51.198198    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:51.212058    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:37:51.212072    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:37:51.212078    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:37:51.226247    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:37:51.226257    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:37:51.243419    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:37:51.243431    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:37:51.260977    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:37:51.260988    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:37:51.275403    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:37:51.275415    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:37:51.286480    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:51.286492    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:51.323473    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:37:51.323486    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:37:51.334723    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:37:51.334734    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:37:51.347675    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:51.347684    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:51.370552    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:51.370560    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:51.409531    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:37:51.409538    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:37:51.446749    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:37:51.446761    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:37:51.461332    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:37:51.461345    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:37:51.476821    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:37:51.476832    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:37:51.498259    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:37:51.498271    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:51.510359    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:51.510372    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:51.515048    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:37:51.515054    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:37:51.529693    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:37:51.529704    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:37:51.545292    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:37:51.545303    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
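The block above is one full pass of minikube's apiserver wait loop: the harness probes the apiserver's /healthz endpoint, and each time the probe times out it re-enumerates the control-plane containers and re-collects their logs, then tries again. A minimal sketch of the probe, assuming curl is available (minikube itself issues the request from Go, not via curl); the 5-second budget matches the observed gap between each "Checking" and "stopped" entry:

    # Probe the endpoint shown in the log; -k skips TLS verification,
    # -f makes a non-2xx response count as failure.
    curl -fsk --max-time 5 https://10.0.2.15:8443/healthz && echo ok
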
	I0723 07:37:54.059781    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:59.062062    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:59.062359    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:59.090745    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:37:59.090872    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:59.108763    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:37:59.108864    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:59.122993    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:37:59.123098    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:59.139616    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:37:59.139688    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:59.150274    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:37:59.150338    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:59.161236    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:37:59.161301    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:59.173501    5099 logs.go:276] 0 containers: []
	W0723 07:37:59.173512    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:59.173570    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:59.186583    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:37:59.186597    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:37:59.186602    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:37:59.201971    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:59.201981    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:59.206432    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:37:59.206439    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:37:59.217000    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:37:59.217011    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:37:59.235606    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:37:59.235620    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:37:59.248029    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:59.248043    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:59.284652    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:37:59.284666    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:37:59.298759    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:37:59.298773    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:37:59.311158    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:37:59.311171    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:37:59.322867    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:37:59.322882    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:37:59.337743    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:37:59.337757    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:37:59.349091    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:59.349104    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:59.372545    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:37:59.372554    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:59.384482    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:59.384496    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:59.421452    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:37:59.421460    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:37:59.437704    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:37:59.437718    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:37:59.449556    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:37:59.449565    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:37:59.490417    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:37:59.490429    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:37:59.508434    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:37:59.508444    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
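Each diagnostic pass begins by locating the component containers. cri-dockerd names Kubernetes containers with a k8s_ prefix, and the -a flag includes exited containers, which is why every component here resolves to two IDs (the current instance plus a previous, stopped one). A sketch of the discovery step, looping over the same filters the log shows:

    # One query per component; a restarted component yields two IDs
    # (e.g. aee96da52a13 df9bb9ec4705 for kube-apiserver above).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      docker ps -a --filter=name=k8s_$c --format '{{.ID}}'
    done
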
	I0723 07:38:02.030313    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:07.032481    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:07.032692    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:38:07.052217    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:38:07.052313    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:38:07.065656    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:38:07.065737    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:38:07.077781    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:38:07.077864    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:38:07.088897    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:38:07.088977    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:38:07.099406    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:38:07.099477    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:38:07.118672    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:38:07.118749    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:38:07.129217    5099 logs.go:276] 0 containers: []
	W0723 07:38:07.129232    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:38:07.129298    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:38:07.139965    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:38:07.139979    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:38:07.139984    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:38:07.154663    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:38:07.154677    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:38:07.168919    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:38:07.168930    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:38:07.188995    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:38:07.189007    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:38:07.200900    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:38:07.200911    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:38:07.219653    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:38:07.219676    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:38:07.224225    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:38:07.224235    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:38:07.258976    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:38:07.258988    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:38:07.296784    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:38:07.296797    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:38:07.309279    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:38:07.309290    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:38:07.321337    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:38:07.321351    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:38:07.338746    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:38:07.338757    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:38:07.354865    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:38:07.354874    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:38:07.365725    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:38:07.365738    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:38:07.376987    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:38:07.377002    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:38:07.415224    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:38:07.415234    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:38:07.426425    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:38:07.426436    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:38:07.437550    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:38:07.437560    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:38:07.460154    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:38:07.460161    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:38:09.973689    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:14.975849    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:14.975954    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:38:14.986603    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:38:14.986680    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:38:14.997326    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:38:14.997404    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:38:15.008274    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:38:15.008353    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:38:15.022765    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:38:15.022829    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:38:15.032841    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:38:15.032903    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:38:15.042844    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:38:15.042918    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:38:15.052486    5099 logs.go:276] 0 containers: []
	W0723 07:38:15.052497    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:38:15.052563    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:38:15.062990    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:38:15.063006    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:38:15.063011    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:38:15.074690    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:38:15.074702    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:38:15.089112    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:38:15.089121    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:38:15.112332    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:38:15.112341    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:38:15.123417    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:38:15.123428    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:38:15.134331    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:38:15.134342    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:38:15.154066    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:38:15.154075    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:38:15.171504    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:38:15.171514    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:38:15.182941    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:38:15.182953    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:38:15.197818    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:38:15.197831    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:38:15.212226    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:38:15.212239    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:38:15.223942    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:38:15.223953    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:38:15.238962    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:38:15.238973    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:38:15.253420    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:38:15.253433    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:38:15.266152    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:38:15.266163    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:38:15.270814    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:38:15.270820    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:38:15.284860    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:38:15.284875    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:38:15.323211    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:38:15.323220    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:38:15.361407    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:38:15.361415    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
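The collection step itself is a fixed set of shell commands, all visible verbatim in the entries above: per-container logs are tailed, and host-level sources (kubelet, the container runtime, the kernel ring buffer, node state) are pulled separately. Consolidated from the log, with a placeholder for the container ID:

    docker logs --tail 400 <container-id>              # per component container
    sudo journalctl -u kubelet -n 400                  # kubelet unit log
    sudo journalctl -u docker -u cri-docker -n 400     # container runtime logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
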
	I0723 07:38:17.899904    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:22.901587    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:22.901747    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:38:22.913310    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:38:22.913388    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:38:22.928487    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:38:22.928557    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:38:22.939475    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:38:22.939552    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:38:22.950198    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:38:22.950273    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:38:22.964788    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:38:22.964851    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:38:22.974928    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:38:22.974996    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:38:22.985231    5099 logs.go:276] 0 containers: []
	W0723 07:38:22.985242    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:38:22.985295    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:38:22.996870    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:38:22.996883    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:38:22.996888    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:38:23.010539    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:38:23.010550    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:38:23.022697    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:38:23.022711    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:38:23.046630    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:38:23.046637    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:38:23.058663    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:38:23.058677    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:38:23.072920    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:38:23.072929    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:38:23.084620    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:38:23.084635    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:38:23.103135    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:38:23.103151    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:38:23.114410    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:38:23.114422    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:38:23.129229    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:38:23.129239    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:38:23.149363    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:38:23.149377    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:38:23.153947    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:38:23.153956    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:38:23.190393    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:38:23.190406    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:38:23.202657    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:38:23.202669    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:38:23.219342    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:38:23.219352    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:38:23.237234    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:38:23.237245    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:38:23.275497    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:38:23.275511    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:38:23.314582    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:38:23.314594    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:38:23.328686    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:38:23.328696    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:38:25.842790    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:30.844941    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:30.845118    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:38:30.859979    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:38:30.860055    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:38:30.875150    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:38:30.875219    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:38:30.885359    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:38:30.885430    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:38:30.898646    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:38:30.898711    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:38:30.909206    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:38:30.909269    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:38:30.919101    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:38:30.919170    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:38:30.929418    5099 logs.go:276] 0 containers: []
	W0723 07:38:30.929429    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:38:30.929489    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:38:30.943694    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:38:30.943709    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:38:30.943714    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:38:30.961624    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:38:30.961635    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:38:30.978621    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:38:30.978632    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:38:30.983455    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:38:30.983468    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:38:31.018320    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:38:31.018332    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:38:31.030548    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:38:31.030559    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:38:31.042296    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:38:31.042305    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:38:31.053252    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:38:31.053263    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:38:31.092296    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:38:31.092309    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:38:31.129188    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:38:31.129198    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:38:31.140314    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:38:31.140327    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:38:31.151901    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:38:31.151911    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:38:31.174718    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:38:31.174725    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:38:31.188561    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:38:31.188572    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:38:31.202285    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:38:31.202295    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:38:31.216425    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:38:31.216434    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:38:31.227512    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:38:31.227524    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:38:31.241898    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:38:31.241908    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:38:31.252915    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:38:31.252926    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:38:33.767101    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:38.769192    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:38.769319    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:38:38.783693    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:38:38.783775    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:38:38.799529    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:38:38.799601    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:38:38.810671    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:38:38.810743    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:38:38.821299    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:38:38.821369    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:38:38.832035    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:38:38.832109    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:38:38.843334    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:38:38.843407    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:38:38.853841    5099 logs.go:276] 0 containers: []
	W0723 07:38:38.853854    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:38:38.853920    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:38:38.864528    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:38:38.864545    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:38:38.864550    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:38:38.878698    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:38:38.878714    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:38:38.894957    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:38:38.894967    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:38:38.938114    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:38:38.938129    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:38:38.977280    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:38:38.977292    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:38:38.989453    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:38:38.989464    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:38:39.026664    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:38:39.026675    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:38:39.041745    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:38:39.041770    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:38:39.054191    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:38:39.054208    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:38:39.065521    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:38:39.065533    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:38:39.077495    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:38:39.077508    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:38:39.094900    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:38:39.094910    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:38:39.105844    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:38:39.105855    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:38:39.110160    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:38:39.110167    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:38:39.126556    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:38:39.126567    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:38:39.141650    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:38:39.141661    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:38:39.163212    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:38:39.163219    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:38:39.178400    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:38:39.178410    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:38:39.190249    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:38:39.190259    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:38:41.703565    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:46.705007    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:46.705215    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:38:46.725171    5099 logs.go:276] 2 containers: [aee96da52a13 df9bb9ec4705]
	I0723 07:38:46.725279    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:38:46.739051    5099 logs.go:276] 2 containers: [436b7f3dcd49 7f4d7b5be87e]
	I0723 07:38:46.739129    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:38:46.751581    5099 logs.go:276] 2 containers: [f3dfab5a09bb 07423546a90a]
	I0723 07:38:46.751662    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:38:46.762558    5099 logs.go:276] 2 containers: [dcc59a34a106 16888ddea83e]
	I0723 07:38:46.762639    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:38:46.772987    5099 logs.go:276] 2 containers: [c6628f8d978d 9be331844d55]
	I0723 07:38:46.773071    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:38:46.784257    5099 logs.go:276] 2 containers: [7ffe4b99b177 c84c78d695a2]
	I0723 07:38:46.784337    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:38:46.794885    5099 logs.go:276] 0 containers: []
	W0723 07:38:46.794897    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:38:46.794958    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:38:46.805606    5099 logs.go:276] 2 containers: [5b1525d40191 c214bc94f819]
	I0723 07:38:46.805622    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:38:46.805628    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:38:46.818084    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:38:46.818096    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:38:46.857645    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:38:46.857655    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:38:46.896016    5099 logs.go:123] Gathering logs for kube-apiserver [aee96da52a13] ...
	I0723 07:38:46.896028    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee96da52a13"
	I0723 07:38:46.910152    5099 logs.go:123] Gathering logs for kube-apiserver [df9bb9ec4705] ...
	I0723 07:38:46.910162    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df9bb9ec4705"
	I0723 07:38:46.947651    5099 logs.go:123] Gathering logs for etcd [436b7f3dcd49] ...
	I0723 07:38:46.947663    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 436b7f3dcd49"
	I0723 07:38:46.963282    5099 logs.go:123] Gathering logs for storage-provisioner [c214bc94f819] ...
	I0723 07:38:46.963292    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c214bc94f819"
	I0723 07:38:46.974923    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:38:46.974934    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:38:46.997820    5099 logs.go:123] Gathering logs for etcd [7f4d7b5be87e] ...
	I0723 07:38:46.997827    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f4d7b5be87e"
	I0723 07:38:47.016684    5099 logs.go:123] Gathering logs for kube-scheduler [dcc59a34a106] ...
	I0723 07:38:47.016695    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc59a34a106"
	I0723 07:38:47.028843    5099 logs.go:123] Gathering logs for kube-scheduler [16888ddea83e] ...
	I0723 07:38:47.028852    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16888ddea83e"
	I0723 07:38:47.040741    5099 logs.go:123] Gathering logs for kube-proxy [9be331844d55] ...
	I0723 07:38:47.040753    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be331844d55"
	I0723 07:38:47.052458    5099 logs.go:123] Gathering logs for kube-controller-manager [c84c78d695a2] ...
	I0723 07:38:47.052469    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84c78d695a2"
	I0723 07:38:47.066998    5099 logs.go:123] Gathering logs for kube-proxy [c6628f8d978d] ...
	I0723 07:38:47.067009    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6628f8d978d"
	I0723 07:38:47.078572    5099 logs.go:123] Gathering logs for kube-controller-manager [7ffe4b99b177] ...
	I0723 07:38:47.078583    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffe4b99b177"
	I0723 07:38:47.095882    5099 logs.go:123] Gathering logs for storage-provisioner [5b1525d40191] ...
	I0723 07:38:47.095891    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1525d40191"
	I0723 07:38:47.110901    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:38:47.110911    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:38:47.115474    5099 logs.go:123] Gathering logs for coredns [f3dfab5a09bb] ...
	I0723 07:38:47.115482    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3dfab5a09bb"
	I0723 07:38:47.127424    5099 logs.go:123] Gathering logs for coredns [07423546a90a] ...
	I0723 07:38:47.127435    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07423546a90a"
	I0723 07:38:49.648731    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:54.651218    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:54.651254    5099 kubeadm.go:597] duration metric: took 4m8.91792475s to restartPrimaryControlPlane
	W0723 07:38:54.651288    5099 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 07:38:54.651303    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0723 07:38:55.792047    5099 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.140753208s)
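After 4m8s of failed health probes the harness gives up on restarting the existing control plane and falls back to a full reset followed by a fresh kubeadm init. The reset command as run in the log; PATH is prefixed so the version-matched kubeadm binary is found first:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
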
	I0723 07:38:55.792110    5099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 07:38:55.797154    5099 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 07:38:55.800092    5099 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 07:38:55.802998    5099 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 07:38:55.803006    5099 kubeadm.go:157] found existing configuration files:
	
	I0723 07:38:55.803029    5099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/admin.conf
	I0723 07:38:55.805539    5099 kubeadm.go:163] "https://control-plane.minikube.internal:50345" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 07:38:55.805562    5099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 07:38:55.808390    5099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/kubelet.conf
	I0723 07:38:55.811379    5099 kubeadm.go:163] "https://control-plane.minikube.internal:50345" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 07:38:55.811405    5099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 07:38:55.813908    5099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/controller-manager.conf
	I0723 07:38:55.816636    5099 kubeadm.go:163] "https://control-plane.minikube.internal:50345" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 07:38:55.816664    5099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 07:38:55.819692    5099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/scheduler.conf
	I0723 07:38:55.822277    5099 kubeadm.go:163] "https://control-plane.minikube.internal:50345" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50345 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 07:38:55.822296    5099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
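The four grep/rm pairs above implement a stale-kubeconfig sweep: any file under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so the subsequent init can regenerate it. Here the reset already deleted all four files, so each grep exits with status 2 and the rm is a no-op. An equivalent shell form of the check (a sketch; minikube issues these commands individually from Go):

    endpoint=https://control-plane.minikube.internal:50345
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
    done
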
	I0723 07:38:55.825424    5099 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 07:38:55.843180    5099 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0723 07:38:55.843208    5099 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 07:38:55.889480    5099 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 07:38:55.889539    5099 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 07:38:55.889607    5099 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 07:38:55.938736    5099 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 07:38:55.946400    5099 out.go:204]   - Generating certificates and keys ...
	I0723 07:38:55.946461    5099 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 07:38:55.946570    5099 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 07:38:55.946625    5099 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 07:38:55.946659    5099 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 07:38:55.946765    5099 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 07:38:55.946880    5099 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 07:38:55.946912    5099 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 07:38:55.946947    5099 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 07:38:55.946986    5099 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 07:38:55.947023    5099 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 07:38:55.947088    5099 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 07:38:55.947125    5099 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 07:38:56.115464    5099 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 07:38:56.396262    5099 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 07:38:56.500095    5099 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 07:38:56.545630    5099 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 07:38:56.575332    5099 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 07:38:56.575940    5099 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 07:38:56.575961    5099 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 07:38:56.663750    5099 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 07:38:56.666426    5099 out.go:204]   - Booting up control plane ...
	I0723 07:38:56.666473    5099 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 07:38:56.666513    5099 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 07:38:56.666556    5099 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 07:38:56.666603    5099 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 07:38:56.670050    5099 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 07:39:00.676714    5099 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.006832 seconds
	I0723 07:39:00.676802    5099 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 07:39:00.680493    5099 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 07:39:01.195472    5099 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 07:39:01.195773    5099 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-350000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 07:39:01.699039    5099 kubeadm.go:310] [bootstrap-token] Using token: 3cxi4o.i7032h0wcnpp91yk
	I0723 07:39:01.705411    5099 out.go:204]   - Configuring RBAC rules ...
	I0723 07:39:01.705473    5099 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 07:39:01.705520    5099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 07:39:01.709959    5099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 07:39:01.710777    5099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 07:39:01.711532    5099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 07:39:01.712504    5099 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 07:39:01.715617    5099 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 07:39:01.892075    5099 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 07:39:02.103672    5099 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 07:39:02.104095    5099 kubeadm.go:310] 
	I0723 07:39:02.104130    5099 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 07:39:02.104135    5099 kubeadm.go:310] 
	I0723 07:39:02.104183    5099 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 07:39:02.104187    5099 kubeadm.go:310] 
	I0723 07:39:02.104199    5099 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 07:39:02.104249    5099 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 07:39:02.104280    5099 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 07:39:02.104283    5099 kubeadm.go:310] 
	I0723 07:39:02.104305    5099 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 07:39:02.104307    5099 kubeadm.go:310] 
	I0723 07:39:02.104330    5099 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 07:39:02.104334    5099 kubeadm.go:310] 
	I0723 07:39:02.104356    5099 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 07:39:02.104398    5099 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 07:39:02.104438    5099 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 07:39:02.104442    5099 kubeadm.go:310] 
	I0723 07:39:02.104486    5099 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 07:39:02.104527    5099 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 07:39:02.104530    5099 kubeadm.go:310] 
	I0723 07:39:02.104580    5099 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3cxi4o.i7032h0wcnpp91yk \
	I0723 07:39:02.104633    5099 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:29adbcbc0a6bf2a081f567e258fc4ee09254f17c26f802d72ace65c98bb575cd \
	I0723 07:39:02.104644    5099 kubeadm.go:310] 	--control-plane 
	I0723 07:39:02.104647    5099 kubeadm.go:310] 
	I0723 07:39:02.104696    5099 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 07:39:02.104700    5099 kubeadm.go:310] 
	I0723 07:39:02.104736    5099 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3cxi4o.i7032h0wcnpp91yk \
	I0723 07:39:02.104781    5099 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:29adbcbc0a6bf2a081f567e258fc4ee09254f17c26f802d72ace65c98bb575cd 
	I0723 07:39:02.104836    5099 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
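Init succeeds in roughly six seconds this time. The only preflight warning is that the kubelet systemd unit is not enabled, and the fix kubeadm itself suggests is:

    sudo systemctl enable kubelet.service
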
	I0723 07:39:02.104847    5099 cni.go:84] Creating CNI manager for ""
	I0723 07:39:02.104856    5099 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:39:02.107756    5099 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 07:39:02.114708    5099 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 07:39:02.117522    5099 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
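minikube pushes the conflist from memory over scp rather than writing it on the host, so the 496-byte payload itself is not echoed in the log. A representative bridge conflist of the kind being installed here (field values are assumptions, not the recorded bytes):

    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
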
	I0723 07:39:02.127890    5099 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 07:39:02.127950    5099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 07:39:02.127977    5099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-350000 minikube.k8s.io/updated_at=2024_07_23T07_39_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=running-upgrade-350000 minikube.k8s.io/primary=true
	I0723 07:39:02.167260    5099 ops.go:34] apiserver oom_adj: -16
	I0723 07:39:02.167332    5099 kubeadm.go:1113] duration metric: took 39.434459ms to wait for elevateKubeSystemPrivileges
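Three post-init operations fire in quick succession above: reading the apiserver's OOM score adjustment (-16, so the kernel's OOM killer strongly prefers other victims), binding cluster-admin to kube-system's default service account (the elevateKubeSystemPrivileges step), and labeling the node with minikube metadata. The first two, reproduced from the log:

    cat /proc/$(pgrep kube-apiserver)/oom_adj   # expect -16
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
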
	I0723 07:39:02.170758    5099 kubeadm.go:394] duration metric: took 4m16.453028791s to StartCluster
	I0723 07:39:02.170772    5099 settings.go:142] acquiring lock: {Name:mkd8f4c38e79948dfc5500ad891e72aa4257d24c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:39:02.170853    5099 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:39:02.171272    5099 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/kubeconfig: {Name:mkd61b3eb94b798a54b8f29057406aee7268d37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:39:02.171491    5099 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:39:02.171507    5099 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 07:39:02.171539    5099 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-350000"
	I0723 07:39:02.171565    5099 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-350000"
	W0723 07:39:02.171569    5099 addons.go:243] addon storage-provisioner should already be in state true
	I0723 07:39:02.171571    5099 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-350000"
	I0723 07:39:02.171580    5099 host.go:66] Checking if "running-upgrade-350000" exists ...
	I0723 07:39:02.171582    5099 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-350000"
	I0723 07:39:02.171579    5099 config.go:182] Loaded profile config "running-upgrade-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0723 07:39:02.172434    5099 kapi.go:59] client config for running-upgrade-350000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/running-upgrade-350000/client.key", CAFile:"/Users/jenkins/minikube-integration/19319-1567/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f03fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
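The rest.Config dump above is the client minikube assembles for the profile, pointing at https://10.0.2.15:8443 with the profile's client cert/key and CA. A minimal client-go sketch that builds an equivalent client from the same kubeconfig path (a sketch only; minikube constructs the config in-process rather than loading it like this):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log lines above.
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/Users/jenkins/minikube-integration/19319-1567/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// In this run the call would fail: 10.0.2.15:8443 never answers.
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}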
	I0723 07:39:02.172554    5099 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-350000"
	W0723 07:39:02.172559    5099 addons.go:243] addon default-storageclass should already be in state true
	I0723 07:39:02.172566    5099 host.go:66] Checking if "running-upgrade-350000" exists ...
	I0723 07:39:02.175753    5099 out.go:177] * Verifying Kubernetes components...
	I0723 07:39:02.176055    5099 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 07:39:02.179913    5099 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 07:39:02.179921    5099 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/running-upgrade-350000/id_rsa Username:docker}
	I0723 07:39:02.183625    5099 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:39:02.187737    5099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:39:02.191679    5099 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 07:39:02.191686    5099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 07:39:02.191693    5099 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/running-upgrade-350000/id_rsa Username:docker}
	I0723 07:39:02.285047    5099 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 07:39:02.290623    5099 api_server.go:52] waiting for apiserver process to appear ...
	I0723 07:39:02.290668    5099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:39:02.294407    5099 api_server.go:72] duration metric: took 122.907334ms to wait for apiserver process to appear ...
	I0723 07:39:02.294415    5099 api_server.go:88] waiting for apiserver healthz status ...
	I0723 07:39:02.294421    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:02.315384    5099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 07:39:02.328525    5099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 07:39:07.296422    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
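The "Checking apiserver healthz … stopped: context deadline exceeded" pairs that repeat from here on follow a plain poll-with-timeout pattern: GET /healthz with a short per-request timeout, retried until an overall deadline. A minimal Go sketch of that loop (the 5s request timeout matches the gaps between log lines; the TLS and retry details are assumptions, not minikube's exact code):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it answers
// "ok" or ctx expires, mirroring the check/stopped cycle in the log.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout; matches the ~5s gaps above
		Transport: &http.Transport{
			// Sketch only: minikube verifies against its own CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	// 6m overall deadline, matching "Will wait 6m0s for node" earlier in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}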
	I0723 07:39:07.296479    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:12.296735    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:12.296767    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:17.297078    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:17.297140    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:22.297609    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:22.297656    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:27.298331    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:27.298361    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:32.299117    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:32.299161    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0723 07:39:32.673799    5099 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0723 07:39:32.677234    5099 out.go:177] * Enabled addons: storage-provisioner
	I0723 07:39:32.688059    5099 addons.go:510] duration metric: took 30.517106083s for enable addons: enabled=[storage-provisioner]
	I0723 07:39:37.300450    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:37.300490    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:42.300200    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:42.300223    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:47.300207    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:47.300256    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:52.300001    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:52.300025    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:57.301231    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:57.301275    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:02.302791    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:02.302913    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:02.315950    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:40:02.316016    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:02.327411    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:40:02.327481    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:02.337807    5099 logs.go:276] 2 containers: [34decae0ac07 5b1d3e997c2c]
	I0723 07:40:02.337885    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:02.348120    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:40:02.348195    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:02.358398    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:40:02.358471    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:02.368642    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:40:02.368707    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:02.378490    5099 logs.go:276] 0 containers: []
	W0723 07:40:02.378501    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:02.378556    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:02.388978    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:40:02.388996    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:40:02.389001    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:40:02.400813    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:40:02.400824    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:40:02.412234    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:40:02.412245    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:40:02.423738    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:40:02.423749    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:40:02.441193    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:40:02.441205    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:40:02.453353    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:02.453363    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:02.478094    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:02.478105    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:02.513795    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:02.513804    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:02.518640    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:40:02.518647    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:02.529939    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:40:02.529950    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:40:02.546162    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:40:02.546173    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:40:02.560889    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:02.560899    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:02.598758    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:40:02.598770    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
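Each diagnostic sweep like the one above follows the same two-step pattern: "docker ps -a --filter=name=k8s_<component>" to find container IDs, then "docker logs --tail 400 <id>" for each hit. A minimal Go sketch of that sweep (hypothetical helper names; not minikube's logs package):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists docker container IDs whose name matches k8s_<component>,
// the same filter the sweep above uses.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, as in the "docker logs --tail 400" runs above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s]\n%s\n", c, id, logs)
		}
	}
}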
	I0723 07:40:05.115732    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:10.117483    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:10.117656    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:10.134461    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:40:10.134547    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:10.147263    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:40:10.147327    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:10.158206    5099 logs.go:276] 2 containers: [34decae0ac07 5b1d3e997c2c]
	I0723 07:40:10.158276    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:10.169534    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:40:10.169606    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:10.180308    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:40:10.180379    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:10.190973    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:40:10.191042    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:10.201348    5099 logs.go:276] 0 containers: []
	W0723 07:40:10.201359    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:10.201414    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:10.218735    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:40:10.218750    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:40:10.218756    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:40:10.229981    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:10.229992    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:10.253343    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:40:10.253351    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:10.264899    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:10.264910    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:10.299907    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:40:10.299919    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:40:10.314446    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:40:10.314456    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:40:10.328441    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:40:10.328450    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:40:10.345729    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:40:10.345739    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:40:10.361812    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:40:10.361822    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:40:10.373353    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:10.373363    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:10.407444    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:10.407454    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:10.411950    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:40:10.411957    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:40:10.423202    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:40:10.423214    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:40:12.939472    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:17.941398    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:17.941632    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:17.969871    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:40:17.969968    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:17.982818    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:40:17.982889    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:17.995026    5099 logs.go:276] 2 containers: [34decae0ac07 5b1d3e997c2c]
	I0723 07:40:17.995098    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:18.005941    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:40:18.006013    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:18.016465    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:40:18.016542    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:18.026770    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:40:18.026842    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:18.038304    5099 logs.go:276] 0 containers: []
	W0723 07:40:18.038321    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:18.038383    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:18.049235    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:40:18.049248    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:40:18.049254    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:40:18.060874    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:40:18.060886    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:40:18.076107    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:40:18.076121    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:40:18.093331    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:18.093342    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:18.097931    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:18.097940    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:18.139278    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:40:18.139291    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:40:18.153179    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:40:18.153190    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:40:18.164466    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:40:18.164478    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:40:18.179638    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:40:18.179652    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:40:18.191338    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:18.191351    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:18.215760    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:40:18.215769    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:18.227348    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:18.227359    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:18.263348    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:40:18.263360    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:40:20.779386    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:25.781427    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:25.781661    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:25.799928    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:40:25.800030    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:25.813848    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:40:25.813924    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:25.826017    5099 logs.go:276] 2 containers: [34decae0ac07 5b1d3e997c2c]
	I0723 07:40:25.826087    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:25.842415    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:40:25.842486    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:25.852689    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:40:25.852765    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:25.863681    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:40:25.863745    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:25.873770    5099 logs.go:276] 0 containers: []
	W0723 07:40:25.873782    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:25.873844    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:25.884100    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:40:25.884113    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:25.884118    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:25.920380    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:25.920395    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:25.925087    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:25.925097    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:25.959870    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:40:25.959881    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:40:25.977858    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:40:25.977867    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:40:25.992753    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:40:25.992764    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:40:26.004977    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:40:26.004989    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:40:26.029948    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:40:26.029958    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:40:26.044428    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:40:26.044438    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:40:26.058230    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:40:26.058241    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:40:26.070313    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:40:26.070324    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:40:26.081385    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:26.081396    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:26.104616    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:40:26.104623    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:28.617782    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:33.618096    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:33.618307    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:33.642260    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:40:33.642382    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:33.658421    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:40:33.658516    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:33.672117    5099 logs.go:276] 2 containers: [34decae0ac07 5b1d3e997c2c]
	I0723 07:40:33.672197    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:33.683818    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:40:33.683887    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:33.694279    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:40:33.694348    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:33.705055    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:40:33.705121    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:33.718640    5099 logs.go:276] 0 containers: []
	W0723 07:40:33.718650    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:33.718706    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:33.729128    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:40:33.729142    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:40:33.729147    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:40:33.742660    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:40:33.742669    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:40:33.754862    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:40:33.754869    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:40:33.770570    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:40:33.770580    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:33.782323    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:33.782331    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:33.816133    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:33.816141    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:33.851438    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:40:33.851450    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:40:33.872307    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:40:33.872319    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:40:33.891916    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:40:33.891927    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:40:33.904208    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:33.904219    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:33.927989    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:33.927998    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:33.932983    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:40:33.932992    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:40:33.944943    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:40:33.944956    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:40:36.463837    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:41.465918    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:41.466152    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:41.491412    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:40:41.491520    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:41.507692    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:40:41.507772    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:41.520051    5099 logs.go:276] 2 containers: [34decae0ac07 5b1d3e997c2c]
	I0723 07:40:41.520125    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:41.531512    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:40:41.531574    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:41.544319    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:40:41.544383    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:41.554407    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:40:41.554475    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:41.564606    5099 logs.go:276] 0 containers: []
	W0723 07:40:41.564615    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:41.564667    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:41.575023    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:40:41.575038    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:40:41.575044    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:40:41.588943    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:40:41.588955    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:40:41.600761    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:40:41.600776    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:40:41.613429    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:40:41.613439    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:41.625385    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:41.625400    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:41.629906    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:40:41.629912    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:40:41.645026    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:40:41.645037    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:40:41.661335    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:40:41.661345    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:40:41.673740    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:40:41.673749    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:40:41.691044    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:40:41.691054    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:40:41.702423    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:41.702433    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:41.726786    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:41.726795    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:41.761308    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:41.761316    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:44.298533    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:49.300750    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:49.300954    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:49.323556    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:40:49.323682    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:49.341856    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:40:49.341949    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:49.357503    5099 logs.go:276] 2 containers: [34decae0ac07 5b1d3e997c2c]
	I0723 07:40:49.357584    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:49.370880    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:40:49.370944    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:49.381871    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:40:49.381939    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:49.392618    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:40:49.392678    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:49.402857    5099 logs.go:276] 0 containers: []
	W0723 07:40:49.402869    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:49.402926    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:49.413246    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:40:49.413262    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:49.413267    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:49.437370    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:49.437381    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:49.441982    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:40:49.441992    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:40:49.456104    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:40:49.456114    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:40:49.469722    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:40:49.469732    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:40:49.481433    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:40:49.481444    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:40:49.496065    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:40:49.496079    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:40:49.507938    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:40:49.507949    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:49.519045    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:49.519057    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:49.553245    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:49.553253    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:49.590738    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:40:49.590754    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:40:49.605042    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:40:49.605056    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:40:49.621194    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:40:49.621206    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:40:52.142000    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:57.144093    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:57.144335    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:57.159875    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:40:57.159966    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:57.172163    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:40:57.172232    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:57.183264    5099 logs.go:276] 2 containers: [34decae0ac07 5b1d3e997c2c]
	I0723 07:40:57.183350    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:57.193856    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:40:57.193929    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:57.205240    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:40:57.205310    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:57.216823    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:40:57.216894    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:57.227141    5099 logs.go:276] 0 containers: []
	W0723 07:40:57.227153    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:57.227221    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:57.238067    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:40:57.238084    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:40:57.238088    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:40:57.252434    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:40:57.252444    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:40:57.263806    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:40:57.263815    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:40:57.275757    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:40:57.275766    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:40:57.287380    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:40:57.287393    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:40:57.304774    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:57.304784    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:57.339751    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:57.339759    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:57.344294    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:40:57.344302    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:40:57.364327    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:40:57.364337    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:40:57.375206    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:57.375216    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:57.398389    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:57.398396    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:57.439559    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:40:57.439571    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:40:57.454741    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:40:57.454751    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:59.968523    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:04.970736    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:04.970884    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:04.985061    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:41:04.985139    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:04.996469    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:41:04.996539    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:05.007046    5099 logs.go:276] 2 containers: [34decae0ac07 5b1d3e997c2c]
	I0723 07:41:05.007119    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:05.021957    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:41:05.022022    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:05.032055    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:41:05.032118    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:05.043166    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:41:05.043241    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:05.053866    5099 logs.go:276] 0 containers: []
	W0723 07:41:05.053878    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:05.053935    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:05.064435    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:41:05.064449    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:41:05.064454    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:41:05.077342    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:05.077353    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:41:05.102289    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:41:05.102296    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:41:05.113679    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:05.113690    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:05.150121    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:41:05.150132    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:41:05.161794    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:41:05.161806    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:41:05.176361    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:41:05.176371    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:41:05.191757    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:41:05.191768    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:41:05.203522    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:41:05.203533    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:41:05.221443    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:41:05.221457    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:41:05.232997    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:05.233007    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:05.269101    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:05.269128    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:05.274180    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:41:05.274189    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:41:07.790905    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:12.792308    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:12.792527    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:12.816038    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:41:12.816139    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:12.832617    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:41:12.832685    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:12.848824    5099 logs.go:276] 2 containers: [34decae0ac07 5b1d3e997c2c]
	I0723 07:41:12.848898    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:12.859727    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:41:12.859800    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:12.870026    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:41:12.870091    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:12.884249    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:41:12.884323    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:12.894453    5099 logs.go:276] 0 containers: []
	W0723 07:41:12.894464    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:12.894522    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:12.905708    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:41:12.905725    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:12.905730    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:12.910367    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:12.910378    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:12.945004    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:41:12.945018    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:41:12.956535    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:41:12.956545    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:41:12.968406    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:12.968416    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:41:12.993460    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:41:12.993468    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:41:13.010717    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:41:13.010728    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:41:13.022293    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:13.022303    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:13.057065    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:41:13.057073    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:41:13.072159    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:41:13.072170    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:41:13.085953    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:41:13.085963    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:41:13.098953    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:41:13.098964    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:41:13.114036    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:41:13.114052    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:41:15.628886    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:20.631079    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:20.631184    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:20.641784    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:41:20.641844    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:20.652326    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:41:20.652397    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:20.666243    5099 logs.go:276] 4 containers: [db1d5061742c b51097b64bf8 34decae0ac07 5b1d3e997c2c]
	I0723 07:41:20.666313    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:20.676578    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:41:20.676646    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:20.686988    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:41:20.687055    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:20.699847    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:41:20.699909    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:20.711588    5099 logs.go:276] 0 containers: []
	W0723 07:41:20.711598    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:20.711648    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:20.721874    5099 logs.go:276] 1 containers: [8b728ae10aec]
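
Note: each dump cycle begins, as above, by resolving one container ID per control-plane component with a docker name filter; containers created for pods get names of the form k8s_<component>_..., which the k8s_ prefix filter matches. A sketch of the same lookup follows (containerIDs is a hypothetical helper; it assumes a docker CLI on PATH, and only the docker arguments are taken from the log).

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (running or not)
    // whose name matches k8s_<component>, as in the log above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            fmt.Printf("%-15s %d containers: %v (err=%v)\n", c, len(ids), ids, err)
        }
    }
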
	I0723 07:41:20.721893    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:20.721898    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:20.726551    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:41:20.726561    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:41:20.744365    5099 logs.go:123] Gathering logs for coredns [db1d5061742c] ...
	I0723 07:41:20.744375    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db1d5061742c"
	I0723 07:41:20.755167    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:41:20.755178    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:41:20.767172    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:20.767182    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:41:20.792073    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:20.792081    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:20.826881    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:41:20.826892    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:41:20.841094    5099 logs.go:123] Gathering logs for coredns [b51097b64bf8] ...
	I0723 07:41:20.841104    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b51097b64bf8"
	I0723 07:41:20.852935    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:41:20.852945    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:41:20.874933    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:41:20.874943    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:41:20.886900    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:20.886910    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:20.922336    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:41:20.922349    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:41:20.936795    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:41:20.936808    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:41:20.948956    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:41:20.948968    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:41:20.963559    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:41:20.963574    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
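
Note: the "container status" step is the one command in the cycle with an explicit fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a runs crictl when it is on the PATH and falls back to docker ps -a when crictl is missing or exits non-zero. The same try-then-fall-back shape in Go (containerStatus is a hypothetical helper; only the command names come from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus mirrors the fallback in the log:
    // prefer crictl, fall back to docker on any failure.
    func containerStatus() (string, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
            return string(out), nil
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        fmt.Println(out, err)
    }
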
	I0723 07:41:23.477764    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:28.480018    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:28.480138    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:28.501223    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:41:28.501291    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:28.511643    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:41:28.511714    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:28.522259    5099 logs.go:276] 4 containers: [db1d5061742c b51097b64bf8 34decae0ac07 5b1d3e997c2c]
	I0723 07:41:28.522324    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:28.532536    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:41:28.532598    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:28.555083    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:41:28.555159    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:28.571113    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:41:28.571183    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:28.582926    5099 logs.go:276] 0 containers: []
	W0723 07:41:28.582936    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:28.582999    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:28.595451    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:41:28.595467    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:28.595472    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:28.631216    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:28.631226    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:28.667090    5099 logs.go:123] Gathering logs for coredns [db1d5061742c] ...
	I0723 07:41:28.667100    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db1d5061742c"
	I0723 07:41:28.678650    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:28.678664    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:28.683031    5099 logs.go:123] Gathering logs for coredns [b51097b64bf8] ...
	I0723 07:41:28.683040    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b51097b64bf8"
	I0723 07:41:28.694708    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:41:28.694718    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:41:28.716846    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:41:28.716857    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:41:28.728526    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:28.728538    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:41:28.753259    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:41:28.753272    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:41:28.765125    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:41:28.765135    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:41:28.776635    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:41:28.776645    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:41:28.788675    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:41:28.788685    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:41:28.801889    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:41:28.801901    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:41:28.817244    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:41:28.817257    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:41:28.831690    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:41:28.831700    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
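
Note: every per-component "Gathering logs for ..." step is the same operation applied to the IDs resolved at the top of the cycle: docker logs --tail 400 <id>, the 400-line cap keeping a crash-looping container from flooding the report. A sketch (tailLogs is a hypothetical helper; the container ID in main is copied from the log purely as an illustration):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailLogs returns at most n lines of a container's log, like the
    // `docker logs --tail 400 <id>` calls in each dump cycle. CombinedOutput
    // captures stderr too, where many components write their logs.
    func tailLogs(id string, n int) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := tailLogs("2e472c4af336", 400) // kube-scheduler ID from the log
        fmt.Println(out, err)
    }
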
	I0723 07:41:31.348781    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:36.351032    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:36.351269    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:36.375185    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:41:36.375296    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:36.396117    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:41:36.396182    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:36.408742    5099 logs.go:276] 4 containers: [db1d5061742c b51097b64bf8 34decae0ac07 5b1d3e997c2c]
	I0723 07:41:36.408815    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:36.419935    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:41:36.420000    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:36.431556    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:41:36.431626    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:36.449981    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:41:36.450047    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:36.465607    5099 logs.go:276] 0 containers: []
	W0723 07:41:36.465617    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:36.465673    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:36.476318    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:41:36.476335    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:41:36.476340    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:41:36.491399    5099 logs.go:123] Gathering logs for coredns [b51097b64bf8] ...
	I0723 07:41:36.491413    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b51097b64bf8"
	I0723 07:41:36.502854    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:41:36.502863    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:41:36.515063    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:41:36.515073    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:41:36.530441    5099 logs.go:123] Gathering logs for coredns [db1d5061742c] ...
	I0723 07:41:36.530452    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db1d5061742c"
	I0723 07:41:36.541841    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:41:36.541853    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:41:36.553695    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:41:36.553706    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:41:36.565377    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:36.565387    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:36.598508    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:36.598516    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:36.602766    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:36.602772    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:36.638193    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:41:36.638203    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:41:36.650125    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:41:36.650135    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:41:36.671988    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:41:36.671997    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:41:36.683763    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:41:36.683774    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:41:36.698169    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:36.698179    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
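
Note: host-level sources go through journalctl rather than docker: the kubelet step reads the last 400 entries of the kubelet unit, and the Docker step merges the docker and cri-docker units into one stream, since -u may be repeated. A sketch (unitLogs is a hypothetical helper; the unit names and line count come from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // unitLogs merges the last n journal entries of several systemd units,
    // as in `journalctl -u docker -u cri-docker -n 400` above.
    func unitLogs(n int, units ...string) (string, error) {
        args := []string{"journalctl", "-n", fmt.Sprint(n)}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        out, err := exec.Command("sudo", args...).Output()
        return string(out), err
    }

    func main() {
        out, err := unitLogs(400, "docker", "cri-docker")
        fmt.Println(out, err)
    }
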
	I0723 07:41:39.223185    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:44.225412    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:44.225657    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:44.249558    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:41:44.249662    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:44.264125    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:41:44.264193    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:44.281493    5099 logs.go:276] 4 containers: [db1d5061742c b51097b64bf8 34decae0ac07 5b1d3e997c2c]
	I0723 07:41:44.281569    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:44.292521    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:41:44.292595    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:44.302985    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:41:44.303055    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:44.313522    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:41:44.313585    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:44.323579    5099 logs.go:276] 0 containers: []
	W0723 07:41:44.323594    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:44.323648    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:44.333969    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:41:44.333987    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:41:44.333992    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:41:44.352285    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:41:44.352295    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:41:44.364086    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:44.364096    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:44.398500    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:44.398509    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:44.433190    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:41:44.433204    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:41:44.448088    5099 logs.go:123] Gathering logs for coredns [b51097b64bf8] ...
	I0723 07:41:44.448099    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b51097b64bf8"
	I0723 07:41:44.459690    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:41:44.459702    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:41:44.478120    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:44.478129    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:41:44.503257    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:44.503266    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:44.507964    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:41:44.507973    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:41:44.522530    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:41:44.522539    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:41:44.534398    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:41:44.534408    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:41:44.549041    5099 logs.go:123] Gathering logs for coredns [db1d5061742c] ...
	I0723 07:41:44.549051    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db1d5061742c"
	I0723 07:41:44.560912    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:41:44.560926    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:41:44.576041    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:41:44.576051    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
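
Note: taken together, the timestamps give the loop's cadence: a probe with a 5-second budget, a sub-second diagnostic dump, then roughly 2.5 seconds of idle time before the next probe (the dump above finishes at 07:41:44.58 and the next check starts at 07:41:47.09), so a failed probe retries about every 8 seconds. A sketch of that shape follows; the durations are read off the log, not taken from minikube's source, and waitForHealthz is a hypothetical helper.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForHealthz polls probe until it succeeds or the overall deadline
    // passes; on each failure it runs dump, as the cycles above do.
    func waitForHealthz(probe func() error, dump func(), overall time.Duration) error {
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            if err := probe(); err == nil {
                return nil
            }
            dump()                                // gather container/unit logs
            time.Sleep(2500 * time.Millisecond)   // idle gap seen in the timestamps
        }
        return errors.New("apiserver never reported healthy")
    }

    func main() {
        err := waitForHealthz(
            func() error { return errors.New("context deadline exceeded") },
            func() { fmt.Println("dumping diagnostics ...") },
            10*time.Second,
        )
        fmt.Println(err)
    }
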
	I0723 07:41:47.089739    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:52.092005    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:52.092129    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:52.103369    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:41:52.103455    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:52.114445    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:41:52.114520    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:52.129101    5099 logs.go:276] 4 containers: [db1d5061742c b51097b64bf8 34decae0ac07 5b1d3e997c2c]
	I0723 07:41:52.129177    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:52.141723    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:41:52.141796    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:52.152670    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:41:52.152744    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:52.163306    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:41:52.163375    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:52.173694    5099 logs.go:276] 0 containers: []
	W0723 07:41:52.173706    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:52.173765    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:52.185645    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:41:52.185663    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:41:52.185668    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:41:52.202219    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:41:52.202231    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:41:52.214070    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:52.214079    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:52.251863    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:52.251875    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:52.256840    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:41:52.256848    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:41:52.270567    5099 logs.go:123] Gathering logs for coredns [b51097b64bf8] ...
	I0723 07:41:52.270579    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b51097b64bf8"
	I0723 07:41:52.282343    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:41:52.282353    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:41:52.293867    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:41:52.293882    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:41:52.306358    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:41:52.306369    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:41:52.326780    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:52.326791    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:41:52.352031    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:52.352043    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:52.387101    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:41:52.387114    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:41:52.401182    5099 logs.go:123] Gathering logs for coredns [db1d5061742c] ...
	I0723 07:41:52.401193    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db1d5061742c"
	I0723 07:41:52.413349    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:41:52.413361    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:41:52.431732    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:41:52.431743    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
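
Note: the "describe nodes" step in each cycle does not use a host kubectl: it invokes the version-pinned binary minikube placed on the VM (/var/lib/minikube/binaries/v1.24.1/kubectl) against the VM-local kubeconfig, so the output describes the cluster under test regardless of the host's active context. Re-run by hand it would look roughly like this sketch (arguments copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Version-pinned kubectl and VM-local kubeconfig, exactly as in the log.
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        fmt.Println(string(out), err)
    }
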
	I0723 07:41:54.945554    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:59.947735    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:59.947933    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:59.980941    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:41:59.981043    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:59.998462    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:41:59.998549    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:42:00.015523    5099 logs.go:276] 4 containers: [db1d5061742c b51097b64bf8 34decae0ac07 5b1d3e997c2c]
	I0723 07:42:00.015597    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:42:00.027458    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:42:00.027531    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:42:00.049558    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:42:00.049630    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:42:00.059986    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:42:00.060059    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:42:00.070690    5099 logs.go:276] 0 containers: []
	W0723 07:42:00.070701    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:42:00.070761    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:42:00.081358    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:42:00.081374    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:42:00.081379    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:42:00.098055    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:42:00.098069    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:42:00.123653    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:42:00.123664    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:42:00.159206    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:42:00.159216    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:42:00.173781    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:42:00.173791    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:42:00.185632    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:42:00.185645    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:42:00.200294    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:42:00.200304    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:42:00.222407    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:42:00.222418    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:42:00.234625    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:42:00.234635    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:42:00.270299    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:42:00.270310    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:42:00.284927    5099 logs.go:123] Gathering logs for coredns [b51097b64bf8] ...
	I0723 07:42:00.284938    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b51097b64bf8"
	I0723 07:42:00.296385    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:42:00.296395    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:42:00.307619    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:42:00.307629    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:42:00.312641    5099 logs.go:123] Gathering logs for coredns [db1d5061742c] ...
	I0723 07:42:00.312648    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db1d5061742c"
	I0723 07:42:00.324642    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:42:00.324655    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
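
Note: the dmesg step is consistently the cheapest in each cycle (a few milliseconds between its two log lines) because --level warn,err,crit,alert,emerg filters the kernel ring buffer before tail -n 400 trims it; the remaining flags appear to control only formatting and paging. The exact pipeline from the log, wrapped for a one-off re-run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same pipeline as the dmesg step above; needs root and a Linux host.
        cmd := `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Println(string(out), err)
    }
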
	I0723 07:42:02.838609    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:42:07.840797    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:42:07.841040    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:42:07.857356    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:42:07.857447    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:42:07.869830    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:42:07.869901    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:42:07.880997    5099 logs.go:276] 4 containers: [db1d5061742c b51097b64bf8 34decae0ac07 5b1d3e997c2c]
	I0723 07:42:07.881065    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:42:07.892122    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:42:07.892186    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:42:07.906007    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:42:07.906069    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:42:07.916189    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:42:07.916255    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:42:07.926590    5099 logs.go:276] 0 containers: []
	W0723 07:42:07.926602    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:42:07.926664    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:42:07.936658    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:42:07.936673    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:42:07.936680    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:42:07.955304    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:42:07.955315    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:42:07.979926    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:42:07.979935    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:42:07.991427    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:42:07.991435    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:42:08.024672    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:42:08.024682    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:42:08.059198    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:42:08.059210    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:42:08.074504    5099 logs.go:123] Gathering logs for coredns [db1d5061742c] ...
	I0723 07:42:08.074513    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db1d5061742c"
	I0723 07:42:08.086068    5099 logs.go:123] Gathering logs for coredns [b51097b64bf8] ...
	I0723 07:42:08.086078    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b51097b64bf8"
	I0723 07:42:08.097794    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:42:08.097804    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:42:08.109306    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:42:08.109316    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:42:08.120857    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:42:08.120868    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:42:08.125277    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:42:08.125284    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:42:08.140138    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:42:08.140148    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:42:08.153520    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:42:08.153530    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:42:08.176578    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:42:08.176586    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:42:10.690150    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:42:15.692362    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:42:15.692567    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:42:15.716022    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:42:15.716111    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:42:15.729558    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:42:15.729626    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:42:15.741560    5099 logs.go:276] 4 containers: [db1d5061742c b51097b64bf8 34decae0ac07 5b1d3e997c2c]
	I0723 07:42:15.741643    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:42:15.752356    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:42:15.752416    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:42:15.762837    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:42:15.762902    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:42:15.773302    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:42:15.773369    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:42:15.783269    5099 logs.go:276] 0 containers: []
	W0723 07:42:15.783278    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:42:15.783331    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:42:15.798690    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:42:15.798707    5099 logs.go:123] Gathering logs for coredns [b51097b64bf8] ...
	I0723 07:42:15.798712    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b51097b64bf8"
	I0723 07:42:15.810855    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:42:15.810867    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:42:15.826175    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:42:15.826188    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:42:15.837979    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:42:15.837991    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:42:15.871655    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:42:15.871664    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:42:15.876152    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:42:15.876159    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:42:15.890676    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:42:15.890689    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:42:15.904654    5099 logs.go:123] Gathering logs for coredns [db1d5061742c] ...
	I0723 07:42:15.904665    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db1d5061742c"
	I0723 07:42:15.916089    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:42:15.916103    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:42:15.927771    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:42:15.927783    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:42:15.952323    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:42:15.952334    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:42:15.967071    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:42:15.967087    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:42:16.002782    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:42:16.002794    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:42:16.016604    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:42:16.016617    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:42:16.028859    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:42:16.028869    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:42:18.554448    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:42:23.556708    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:42:23.557168    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:42:23.594542    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:42:23.594685    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:42:23.617234    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:42:23.617324    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:42:23.632318    5099 logs.go:276] 4 containers: [db1d5061742c b51097b64bf8 34decae0ac07 5b1d3e997c2c]
	I0723 07:42:23.632401    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:42:23.645037    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:42:23.645114    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:42:23.655534    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:42:23.655605    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:42:23.666533    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:42:23.666605    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:42:23.677196    5099 logs.go:276] 0 containers: []
	W0723 07:42:23.677207    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:42:23.677264    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:42:23.687825    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:42:23.687839    5099 logs.go:123] Gathering logs for coredns [db1d5061742c] ...
	I0723 07:42:23.687845    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db1d5061742c"
	I0723 07:42:23.699823    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:42:23.699835    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:42:23.716725    5099 logs.go:123] Gathering logs for coredns [b51097b64bf8] ...
	I0723 07:42:23.716735    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b51097b64bf8"
	I0723 07:42:23.732605    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:42:23.732616    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:42:23.750705    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:42:23.750716    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:42:23.762859    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:42:23.762872    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:42:23.780647    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:42:23.780660    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:42:23.785294    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:42:23.785301    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:42:23.800322    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:42:23.800335    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:42:23.812268    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:42:23.812278    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:42:23.836906    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:42:23.836914    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:42:23.848738    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:42:23.848751    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:42:23.883665    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:42:23.883676    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:42:23.898416    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:42:23.898426    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:42:23.915590    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:42:23.915602    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:42:26.452628    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:42:31.454941    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:42:31.455288    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:42:31.496471    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:42:31.496630    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:42:31.519340    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:42:31.519456    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:42:31.535016    5099 logs.go:276] 4 containers: [db1d5061742c b51097b64bf8 34decae0ac07 5b1d3e997c2c]
	I0723 07:42:31.535102    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:42:31.548606    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:42:31.548678    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:42:31.563828    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:42:31.563892    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:42:31.574972    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:42:31.575046    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:42:31.588280    5099 logs.go:276] 0 containers: []
	W0723 07:42:31.588295    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:42:31.588360    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:42:31.598493    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:42:31.598510    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:42:31.598515    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:42:31.613946    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:42:31.613957    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:42:31.626457    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:42:31.626468    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:42:31.638569    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:42:31.638580    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:42:31.672361    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:42:31.672369    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:42:31.707073    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:42:31.707084    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:42:31.725379    5099 logs.go:123] Gathering logs for coredns [b51097b64bf8] ...
	I0723 07:42:31.725389    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b51097b64bf8"
	I0723 07:42:31.737446    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:42:31.737456    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:42:31.749002    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:42:31.749013    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:42:31.772720    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:42:31.772731    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:42:31.790292    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:42:31.790301    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:42:31.795404    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:42:31.795410    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:42:31.809534    5099 logs.go:123] Gathering logs for coredns [db1d5061742c] ...
	I0723 07:42:31.809552    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db1d5061742c"
	I0723 07:42:31.820894    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:42:31.820904    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:42:31.832374    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:42:31.832390    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:42:34.346880    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:42:39.349106    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:42:39.349248    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:42:39.360009    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:42:39.360082    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:42:39.370852    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:42:39.370927    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:42:39.381476    5099 logs.go:276] 4 containers: [db1d5061742c b51097b64bf8 34decae0ac07 5b1d3e997c2c]
	I0723 07:42:39.381547    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:42:39.392573    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:42:39.392635    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:42:39.403022    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:42:39.403100    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:42:39.413580    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:42:39.413649    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:42:39.423803    5099 logs.go:276] 0 containers: []
	W0723 07:42:39.423818    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:42:39.423874    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:42:39.434270    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:42:39.434289    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:42:39.434295    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:42:39.438973    5099 logs.go:123] Gathering logs for coredns [db1d5061742c] ...
	I0723 07:42:39.438982    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db1d5061742c"
	I0723 07:42:39.450247    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:42:39.450260    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:42:39.485484    5099 logs.go:123] Gathering logs for coredns [b51097b64bf8] ...
	I0723 07:42:39.485496    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b51097b64bf8"
	I0723 07:42:39.497363    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:42:39.497373    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:42:39.508665    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:42:39.508678    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:42:39.524257    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:42:39.524268    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:42:39.535827    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:42:39.535839    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:42:39.547846    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:42:39.547858    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:42:39.572663    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:42:39.572670    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:42:39.607744    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:42:39.607752    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:42:39.622074    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:42:39.622085    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:42:39.636392    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:42:39.636402    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:42:39.648219    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:42:39.648230    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:42:39.660322    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:42:39.660333    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:42:42.179854    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:42:47.182064    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:42:47.182411    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:42:47.215875    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:42:47.215993    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:42:47.232055    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:42:47.232131    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:42:47.247518    5099 logs.go:276] 4 containers: [db1d5061742c b51097b64bf8 34decae0ac07 5b1d3e997c2c]
	I0723 07:42:47.247598    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:42:47.258593    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:42:47.258667    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:42:47.269872    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:42:47.269939    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:42:47.280085    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:42:47.280145    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:42:47.291940    5099 logs.go:276] 0 containers: []
	W0723 07:42:47.291952    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:42:47.292015    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:42:47.302940    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:42:47.302955    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:42:47.302960    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:42:47.321426    5099 logs.go:123] Gathering logs for coredns [b51097b64bf8] ...
	I0723 07:42:47.321439    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b51097b64bf8"
	I0723 07:42:47.336783    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:42:47.336793    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:42:47.353007    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:42:47.353016    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:42:47.367686    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:42:47.367696    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:42:47.393051    5099 logs.go:123] Gathering logs for coredns [db1d5061742c] ...
	I0723 07:42:47.393062    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db1d5061742c"
	I0723 07:42:47.404751    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:42:47.404761    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:42:47.416933    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:42:47.416944    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:42:47.428582    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:42:47.428593    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:42:47.440598    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:42:47.440609    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:42:47.459074    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:42:47.459086    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:42:47.493165    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:42:47.493174    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:42:47.497507    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:42:47.497516    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:42:47.532478    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:42:47.532488    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:42:47.547236    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:42:47.547247    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:42:50.061111    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:42:55.063207    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:42:55.063430    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:42:55.081347    5099 logs.go:276] 1 containers: [20ae82625da3]
	I0723 07:42:55.081430    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:42:55.095028    5099 logs.go:276] 1 containers: [f976bacab302]
	I0723 07:42:55.095102    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:42:55.106957    5099 logs.go:276] 4 containers: [db1d5061742c b51097b64bf8 34decae0ac07 5b1d3e997c2c]
	I0723 07:42:55.107032    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:42:55.117762    5099 logs.go:276] 1 containers: [2e472c4af336]
	I0723 07:42:55.117824    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:42:55.129162    5099 logs.go:276] 1 containers: [16b96129458d]
	I0723 07:42:55.129234    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:42:55.140491    5099 logs.go:276] 1 containers: [25d1f3da9b58]
	I0723 07:42:55.140560    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:42:55.151426    5099 logs.go:276] 0 containers: []
	W0723 07:42:55.151439    5099 logs.go:278] No container was found matching "kindnet"
	I0723 07:42:55.151493    5099 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:42:55.162208    5099 logs.go:276] 1 containers: [8b728ae10aec]
	I0723 07:42:55.162226    5099 logs.go:123] Gathering logs for dmesg ...
	I0723 07:42:55.162232    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:42:55.166816    5099 logs.go:123] Gathering logs for coredns [b51097b64bf8] ...
	I0723 07:42:55.166822    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b51097b64bf8"
	I0723 07:42:55.178167    5099 logs.go:123] Gathering logs for coredns [5b1d3e997c2c] ...
	I0723 07:42:55.178178    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b1d3e997c2c"
	I0723 07:42:55.190237    5099 logs.go:123] Gathering logs for kube-controller-manager [25d1f3da9b58] ...
	I0723 07:42:55.190248    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25d1f3da9b58"
	I0723 07:42:55.213295    5099 logs.go:123] Gathering logs for kubelet ...
	I0723 07:42:55.213305    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:42:55.249208    5099 logs.go:123] Gathering logs for coredns [db1d5061742c] ...
	I0723 07:42:55.249223    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db1d5061742c"
	I0723 07:42:55.261094    5099 logs.go:123] Gathering logs for kube-scheduler [2e472c4af336] ...
	I0723 07:42:55.261104    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e472c4af336"
	I0723 07:42:55.275701    5099 logs.go:123] Gathering logs for storage-provisioner [8b728ae10aec] ...
	I0723 07:42:55.275711    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b728ae10aec"
	I0723 07:42:55.286708    5099 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:42:55.286718    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:42:55.323183    5099 logs.go:123] Gathering logs for coredns [34decae0ac07] ...
	I0723 07:42:55.323194    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34decae0ac07"
	I0723 07:42:55.340757    5099 logs.go:123] Gathering logs for kube-proxy [16b96129458d] ...
	I0723 07:42:55.340768    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16b96129458d"
	I0723 07:42:55.352678    5099 logs.go:123] Gathering logs for container status ...
	I0723 07:42:55.352688    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:42:55.364461    5099 logs.go:123] Gathering logs for kube-apiserver [20ae82625da3] ...
	I0723 07:42:55.364475    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20ae82625da3"
	I0723 07:42:55.378976    5099 logs.go:123] Gathering logs for etcd [f976bacab302] ...
	I0723 07:42:55.378987    5099 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f976bacab302"
	I0723 07:42:55.394737    5099 logs.go:123] Gathering logs for Docker ...
	I0723 07:42:55.394746    5099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:42:57.917518    5099 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:43:02.919745    5099 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:43:02.923416    5099 out.go:177] 
	W0723 07:43:02.927201    5099 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0723 07:43:02.927210    5099 out.go:239] * 
	W0723 07:43:02.927894    5099 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:43:02.939146    5099 out.go:177] 

                                                
                                                
** /stderr **
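The failing probe above is minikube polling https://10.0.2.15:8443/healthz with a 5s client timeout until the 6m0s node deadline expires. A quick cross-check from inside the guest, as a sketch (assuming the profile is still up and curl is present in the guest image):

	# Probe the same endpoint minikube polls, from inside the VM.
	out/minikube-darwin-arm64 -p running-upgrade-350000 ssh -- \
	  curl -ks https://10.0.2.15:8443/healthz
	# A healthy apiserver answers "ok"; a hang here reproduces the
	# "context deadline exceeded" seen in the log above.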
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-350000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-23 07:43:03.046747 -0700 PDT m=+2845.711871626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-350000 -n running-upgrade-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-350000 -n running-upgrade-350000: exit status 2 (15.626840458s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-350000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-703000 sudo cat                            | cilium-703000             | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-703000 sudo cat                            | cilium-703000             | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-703000 sudo                                | cilium-703000             | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-703000 sudo                                | cilium-703000             | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-703000 sudo                                | cilium-703000             | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-703000 sudo cat                            | cilium-703000             | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-703000 sudo cat                            | cilium-703000             | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-703000 sudo                                | cilium-703000             | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-703000 sudo                                | cilium-703000             | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-703000 sudo                                | cilium-703000             | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-703000 sudo find                           | cilium-703000             | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-703000 sudo crio                           | cilium-703000             | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-703000                                     | cilium-703000             | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT | 23 Jul 24 07:32 PDT |
	| start   | -p kubernetes-upgrade-289000                         | kubernetes-upgrade-289000 | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-252000                             | offline-docker-252000     | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT | 23 Jul 24 07:32 PDT |
	| start   | -p stopped-upgrade-462000                            | minikube                  | jenkins | v1.26.0 | 23 Jul 24 07:32 PDT | 23 Jul 24 07:33 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-289000                         | kubernetes-upgrade-289000 | jenkins | v1.33.1 | 23 Jul 24 07:32 PDT | 23 Jul 24 07:33 PDT |
	| start   | -p kubernetes-upgrade-289000                         | kubernetes-upgrade-289000 | jenkins | v1.33.1 | 23 Jul 24 07:33 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-289000                         | kubernetes-upgrade-289000 | jenkins | v1.33.1 | 23 Jul 24 07:33 PDT | 23 Jul 24 07:33 PDT |
	| start   | -p running-upgrade-350000                            | minikube                  | jenkins | v1.26.0 | 23 Jul 24 07:33 PDT | 23 Jul 24 07:34 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-462000 stop                          | minikube                  | jenkins | v1.26.0 | 23 Jul 24 07:33 PDT | 23 Jul 24 07:34 PDT |
	| start   | -p stopped-upgrade-462000                            | stopped-upgrade-462000    | jenkins | v1.33.1 | 23 Jul 24 07:34 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-350000                            | running-upgrade-350000    | jenkins | v1.33.1 | 23 Jul 24 07:34 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-462000                            | stopped-upgrade-462000    | jenkins | v1.33.1 | 23 Jul 24 07:43 PDT | 23 Jul 24 07:43 PDT |
	| start   | -p pause-313000 --memory=2048                        | pause-313000              | jenkins | v1.33.1 | 23 Jul 24 07:43 PDT |                     |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                            |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 07:43:15
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 07:43:15.597674    5304 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:43:15.597856    5304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:43:15.597863    5304 out.go:304] Setting ErrFile to fd 2...
	I0723 07:43:15.597864    5304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:43:15.597993    5304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:43:15.599015    5304 out.go:298] Setting JSON to false
	I0723 07:43:15.618724    5304 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4359,"bootTime":1721741436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:43:15.618825    5304 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:43:15.623060    5304 out.go:177] * [pause-313000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:43:15.630126    5304 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:43:15.630211    5304 notify.go:220] Checking for updates...
	I0723 07:43:15.637028    5304 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:43:15.638293    5304 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:43:15.641055    5304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:43:15.644082    5304 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:43:15.647126    5304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:43:15.650445    5304 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:43:15.650518    5304 config.go:182] Loaded profile config "running-upgrade-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0723 07:43:15.650582    5304 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:43:15.655029    5304 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:43:15.662059    5304 start.go:297] selected driver: qemu2
	I0723 07:43:15.662062    5304 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:43:15.662068    5304 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:43:15.664896    5304 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:43:15.668056    5304 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:43:15.671148    5304 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:43:15.671162    5304 cni.go:84] Creating CNI manager for ""
	I0723 07:43:15.671173    5304 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:43:15.671175    5304 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 07:43:15.671199    5304 start.go:340] cluster config:
	{Name:pause-313000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-313000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:43:15.675405    5304 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:43:15.682083    5304 out.go:177] * Starting "pause-313000" primary control-plane node in "pause-313000" cluster
	I0723 07:43:15.685024    5304 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:43:15.685039    5304 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:43:15.685047    5304 cache.go:56] Caching tarball of preloaded images
	I0723 07:43:15.685110    5304 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:43:15.685114    5304 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:43:15.685185    5304 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/pause-313000/config.json ...
	I0723 07:43:15.685195    5304 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/pause-313000/config.json: {Name:mke7933759b06fd465d24a13228ff70ea9cfa3b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:43:15.685517    5304 start.go:360] acquireMachinesLock for pause-313000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:43:15.685544    5304 start.go:364] duration metric: took 23.417µs to acquireMachinesLock for "pause-313000"
	I0723 07:43:15.685552    5304 start.go:93] Provisioning new machine with config: &{Name:pause-313000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-313000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:43:15.685578    5304 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:43:15.693017    5304 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0723 07:43:15.718555    5304 start.go:159] libmachine.API.Create for "pause-313000" (driver="qemu2")
	I0723 07:43:15.718583    5304 client.go:168] LocalClient.Create starting
	I0723 07:43:15.718673    5304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:43:15.718709    5304 main.go:141] libmachine: Decoding PEM data...
	I0723 07:43:15.718732    5304 main.go:141] libmachine: Parsing certificate...
	I0723 07:43:15.718771    5304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:43:15.718791    5304 main.go:141] libmachine: Decoding PEM data...
	I0723 07:43:15.718804    5304 main.go:141] libmachine: Parsing certificate...
	I0723 07:43:15.719160    5304 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:43:16.219342    5304 main.go:141] libmachine: Creating SSH key...
	I0723 07:43:16.287137    5304 main.go:141] libmachine: Creating Disk image...
	I0723 07:43:16.287141    5304 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:43:16.287306    5304 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/pause-313000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/pause-313000/disk.qcow2
	I0723 07:43:16.302741    5304 main.go:141] libmachine: STDOUT: 
	I0723 07:43:16.302757    5304 main.go:141] libmachine: STDERR: 
	I0723 07:43:16.302819    5304 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/pause-313000/disk.qcow2 +20000M
	I0723 07:43:16.310826    5304 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:43:16.310838    5304 main.go:141] libmachine: STDERR: 
	I0723 07:43:16.310851    5304 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/pause-313000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/pause-313000/disk.qcow2
	I0723 07:43:16.310857    5304 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:43:16.310870    5304 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:43:16.310892    5304 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/pause-313000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/pause-313000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/pause-313000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:19:ea:eb:33:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/pause-313000/disk.qcow2
	I0723 07:43:16.323276    5304 main.go:141] libmachine: STDOUT: 
	I0723 07:43:16.323292    5304 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:43:16.323327    5304 client.go:171] duration metric: took 604.746459ms to LocalClient.Create
	I0723 07:43:18.325636    5304 start.go:128] duration metric: took 2.640071542s to createHost
	I0723 07:43:18.325741    5304 start.go:83] releasing machines lock for "pause-313000", held for 2.640240584s
	W0723 07:43:18.325830    5304 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:43:18.335996    5304 out.go:177] * Deleting "pause-313000" in qemu2 ...
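	This pause-313000 start fails at VM creation: nothing is accepting connections on /var/run/socket_vmnet, so the qemu2 driver cannot attach its network device. A minimal sketch for verifying the socket_vmnet daemon on the host (assuming the usual Homebrew install on these agents):

	# Is anything serving the socket the qemu2 driver dials?
	ls -l /var/run/socket_vmnet
	# With a Homebrew install, socket_vmnet runs as a root service:
	sudo brew services list | grep socket_vmnet
	sudo brew services start socket_vmnet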
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-07-23 14:33:42 UTC, ends at Tue 2024-07-23 14:43:18 UTC. --
	Jul 23 14:43:03 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:03Z" level=error msg="ContainerStats resp: {0x40008c7d00 linux}"
	Jul 23 14:43:03 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:03Z" level=error msg="ContainerStats resp: {0x40007eed00 linux}"
	Jul 23 14:43:03 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:03Z" level=error msg="ContainerStats resp: {0x40006c6c40 linux}"
	Jul 23 14:43:03 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:03Z" level=error msg="ContainerStats resp: {0x40006c73c0 linux}"
	Jul 23 14:43:04 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:04Z" level=error msg="ContainerStats resp: {0x40001cde80 linux}"
	Jul 23 14:43:05 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:05Z" level=error msg="ContainerStats resp: {0x40008c7600 linux}"
	Jul 23 14:43:05 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:05Z" level=error msg="ContainerStats resp: {0x40007ef680 linux}"
	Jul 23 14:43:05 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:05Z" level=error msg="ContainerStats resp: {0x40007efc00 linux}"
	Jul 23 14:43:05 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:05Z" level=error msg="ContainerStats resp: {0x40008c7ec0 linux}"
	Jul 23 14:43:05 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:05Z" level=error msg="ContainerStats resp: {0x40009da040 linux}"
	Jul 23 14:43:05 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:05Z" level=error msg="ContainerStats resp: {0x4000a107c0 linux}"
	Jul 23 14:43:05 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:05Z" level=error msg="ContainerStats resp: {0x40009da880 linux}"
	Jul 23 14:43:06 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:06Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 23 14:43:11 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:11Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 23 14:43:15 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:15Z" level=error msg="ContainerStats resp: {0x40007ee140 linux}"
	Jul 23 14:43:15 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:15Z" level=error msg="ContainerStats resp: {0x40008c7280 linux}"
	Jul 23 14:43:16 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:16Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 23 14:43:16 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:16Z" level=error msg="ContainerStats resp: {0x40008c66c0 linux}"
	Jul 23 14:43:17 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:17Z" level=error msg="ContainerStats resp: {0x40007efc80 linux}"
	Jul 23 14:43:17 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:17Z" level=error msg="ContainerStats resp: {0x40008c7f40 linux}"
	Jul 23 14:43:17 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:17Z" level=error msg="ContainerStats resp: {0x4000359040 linux}"
	Jul 23 14:43:17 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:17Z" level=error msg="ContainerStats resp: {0x40006c6040 linux}"
	Jul 23 14:43:17 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:17Z" level=error msg="ContainerStats resp: {0x40006c6900 linux}"
	Jul 23 14:43:17 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:17Z" level=error msg="ContainerStats resp: {0x40006c6d80 linux}"
	Jul 23 14:43:17 running-upgrade-350000 cri-dockerd[4294]: time="2024-07-23T14:43:17Z" level=error msg="ContainerStats resp: {0x40006c74c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	fce01a125e08c       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   9fe7237552e0e
	e504bdd91b148       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   8ec5085b2e82f
	db1d5061742c0       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   8ec5085b2e82f
	b51097b64bf84       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   9fe7237552e0e
	16b96129458d9       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   29328c57be4c0
	8b728ae10aec6       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   e4be4e6730b3f
	25d1f3da9b587       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   93232c03cdda8
	2e472c4af3360       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   0e22e3fb3bc1a
	20ae82625da38       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   5e8dfff820e17
	f976bacab3020       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   25ea3c56cbea9
	
	
	==> coredns [b51097b64bf8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6735459419816007701.2415959058325246055. HINFO: read udp 10.244.0.3:59217->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6735459419816007701.2415959058325246055. HINFO: read udp 10.244.0.3:43667->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6735459419816007701.2415959058325246055. HINFO: read udp 10.244.0.3:48885->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6735459419816007701.2415959058325246055. HINFO: read udp 10.244.0.3:33840->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6735459419816007701.2415959058325246055. HINFO: read udp 10.244.0.3:33735->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6735459419816007701.2415959058325246055. HINFO: read udp 10.244.0.3:44041->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6735459419816007701.2415959058325246055. HINFO: read udp 10.244.0.3:58934->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6735459419816007701.2415959058325246055. HINFO: read udp 10.244.0.3:36536->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6735459419816007701.2415959058325246055. HINFO: read udp 10.244.0.3:48787->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6735459419816007701.2415959058325246055. HINFO: read udp 10.244.0.3:44263->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [db1d5061742c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2895403394428245978.7505571834849214041. HINFO: read udp 10.244.0.2:52362->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2895403394428245978.7505571834849214041. HINFO: read udp 10.244.0.2:56294->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2895403394428245978.7505571834849214041. HINFO: read udp 10.244.0.2:51032->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2895403394428245978.7505571834849214041. HINFO: read udp 10.244.0.2:51369->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2895403394428245978.7505571834849214041. HINFO: read udp 10.244.0.2:40847->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2895403394428245978.7505571834849214041. HINFO: read udp 10.244.0.2:55102->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2895403394428245978.7505571834849214041. HINFO: read udp 10.244.0.2:36412->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2895403394428245978.7505571834849214041. HINFO: read udp 10.244.0.2:56289->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2895403394428245978.7505571834849214041. HINFO: read udp 10.244.0.2:34414->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2895403394428245978.7505571834849214041. HINFO: read udp 10.244.0.2:54051->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e504bdd91b14] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1807611564952129799.2136536543893403610. HINFO: read udp 10.244.0.2:37000->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1807611564952129799.2136536543893403610. HINFO: read udp 10.244.0.2:47915->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1807611564952129799.2136536543893403610. HINFO: read udp 10.244.0.2:41169->10.0.2.3:53: i/o timeout
	
	
	==> coredns [fce01a125e08] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3956434686085543462.1443952979395892090. HINFO: read udp 10.244.0.3:48908->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3956434686085543462.1443952979395892090. HINFO: read udp 10.244.0.3:41886->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3956434686085543462.1443952979395892090. HINFO: read udp 10.244.0.3:36924->10.0.2.3:53: i/o timeout
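	
	All four coredns instances time out against the same upstream, 10.0.2.3:53, which is QEMU's user-mode-networking DNS forwarder. A hedged way to exercise that resolver directly (assuming the busybox nslookup bundled in the guest image):
	
	# Query QEMU's built-in resolver from inside the VM.
	out/minikube-darwin-arm64 -p running-upgrade-350000 ssh -- \
	  nslookup registry.k8s.io 10.0.2.3
	# Timeouts here would implicate host/QEMU networking rather
	# than coredns itself.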
	
	
	==> describe nodes <==
	Name:               running-upgrade-350000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-350000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=running-upgrade-350000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T07_39_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:38:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-350000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:43:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:39:02 +0000   Tue, 23 Jul 2024 14:38:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:39:02 +0000   Tue, 23 Jul 2024 14:38:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:39:02 +0000   Tue, 23 Jul 2024 14:38:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:39:02 +0000   Tue, 23 Jul 2024 14:39:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-350000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 da96e65d2f3347e1a96fcd2f1ac58887
	  System UUID:                da96e65d2f3347e1a96fcd2f1ac58887
	  Boot ID:                    4593686b-66db-4aee-8330-24f9e3cd2c13
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-rllz6                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-wzt69                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-350000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-350000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-350000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-q8724                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-350000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m22s (x5 over 4m23s)  kubelet          Node running-upgrade-350000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x5 over 4m23s)  kubelet          Node running-upgrade-350000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x4 over 4m23s)  kubelet          Node running-upgrade-350000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-350000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-350000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-350000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-350000 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-350000 event: Registered Node running-upgrade-350000 in Controller
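	
	The node object still reports Ready and its kubelet lease was renewed at 14:43:16, so the apiserver is at least intermittently serving; it is the external healthz probe that never succeeds within its deadline. To inspect pod state through the same in-guest kubectl the log gatherer uses (a sketch, reusing the binary and kubeconfig paths shown in the log above):
	
	# List pods via the kubeconfig baked into the guest.
	out/minikube-darwin-arm64 -p running-upgrade-350000 ssh -- \
	  sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get pods -A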
	
	
	==> dmesg <==
	[  +0.172694] systemd-fstab-generator[854]: Ignoring "noauto" for root device
	[  +0.074759] systemd-fstab-generator[865]: Ignoring "noauto" for root device
	[  +0.064177] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.180309] systemd-fstab-generator[1026]: Ignoring "noauto" for root device
	[  +0.064212] systemd-fstab-generator[1037]: Ignoring "noauto" for root device
	[  +2.071555] systemd-fstab-generator[1264]: Ignoring "noauto" for root device
	[Jul23 14:34] systemd-fstab-generator[1782]: Ignoring "noauto" for root device
	[ +16.677091] kauditd_printk_skb: 86 callbacks suppressed
	[  +1.371770] systemd-fstab-generator[2619]: Ignoring "noauto" for root device
	[  +0.202318] systemd-fstab-generator[2658]: Ignoring "noauto" for root device
	[  +0.110539] systemd-fstab-generator[2669]: Ignoring "noauto" for root device
	[  +0.116904] systemd-fstab-generator[2682]: Ignoring "noauto" for root device
	[  +4.198730] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.420105] systemd-fstab-generator[4251]: Ignoring "noauto" for root device
	[  +0.085612] systemd-fstab-generator[4262]: Ignoring "noauto" for root device
	[  +0.081151] systemd-fstab-generator[4273]: Ignoring "noauto" for root device
	[  +0.096884] systemd-fstab-generator[4287]: Ignoring "noauto" for root device
	[  +2.460499] systemd-fstab-generator[4556]: Ignoring "noauto" for root device
	[  +2.556811] systemd-fstab-generator[4903]: Ignoring "noauto" for root device
	[  +1.263161] systemd-fstab-generator[5046]: Ignoring "noauto" for root device
	[  +4.826358] kauditd_printk_skb: 80 callbacks suppressed
	[  +6.450254] kauditd_printk_skb: 1 callbacks suppressed
	[Jul23 14:38] systemd-fstab-generator[13815]: Ignoring "noauto" for root device
	[Jul23 14:39] systemd-fstab-generator[14400]: Ignoring "noauto" for root device
	[  +0.476005] systemd-fstab-generator[14539]: Ignoring "noauto" for root device
	
	
	==> etcd [f976bacab302] <==
	{"level":"info","ts":"2024-07-23T14:38:57.801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-23T14:38:57.801Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-23T14:38:57.806Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-23T14:38:57.826Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-23T14:38:57.826Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-23T14:38:57.819Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-23T14:38:57.826Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-23T14:38:58.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-23T14:38:58.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-23T14:38:58.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-23T14:38:58.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-23T14:38:58.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-23T14:38:58.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-23T14:38:58.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-23T14:38:58.097Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:38:58.102Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:38:58.102Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:38:58.102Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:38:58.102Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-350000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T14:38:58.102Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:38:58.103Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T14:38:58.103Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:38:58.104Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-23T14:38:58.110Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T14:38:58.110Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:43:19 up 9 min,  0 users,  load average: 0.07, 0.19, 0.13
	Linux running-upgrade-350000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [20ae82625da3] <==
	I0723 14:38:59.470768       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0723 14:38:59.470858       1 cache.go:39] Caches are synced for autoregister controller
	I0723 14:38:59.470941       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0723 14:38:59.470979       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0723 14:38:59.471098       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0723 14:38:59.472006       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0723 14:38:59.517525       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0723 14:39:00.197823       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0723 14:39:00.374877       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0723 14:39:00.378067       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0723 14:39:00.378094       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0723 14:39:00.512884       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0723 14:39:00.524533       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0723 14:39:00.637923       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0723 14:39:00.640105       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0723 14:39:00.640531       1 controller.go:611] quota admission added evaluator for: endpoints
	I0723 14:39:00.642389       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0723 14:39:01.529443       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0723 14:39:01.932596       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0723 14:39:01.935823       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0723 14:39:01.943903       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0723 14:39:01.985181       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0723 14:39:14.981847       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0723 14:39:15.282054       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0723 14:39:16.462247       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [25d1f3da9b58] <==
	I0723 14:39:14.930719       1 shared_informer.go:262] Caches are synced for daemon sets
	I0723 14:39:14.930770       1 shared_informer.go:262] Caches are synced for TTL
	I0723 14:39:14.933259       1 shared_informer.go:262] Caches are synced for taint
	I0723 14:39:14.933468       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0723 14:39:14.933523       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0723 14:39:14.933670       1 event.go:294] "Event occurred" object="running-upgrade-350000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-350000 event: Registered Node running-upgrade-350000 in Controller"
	W0723 14:39:14.933930       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-350000. Assuming now as a timestamp.
	I0723 14:39:14.934002       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0723 14:39:14.944086       1 shared_informer.go:262] Caches are synced for node
	I0723 14:39:14.944106       1 range_allocator.go:173] Starting range CIDR allocator
	I0723 14:39:14.944120       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0723 14:39:14.944124       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0723 14:39:14.946896       1 range_allocator.go:374] Set node running-upgrade-350000 PodCIDR to [10.244.0.0/24]
	I0723 14:39:14.947721       1 shared_informer.go:262] Caches are synced for persistent volume
	I0723 14:39:14.949284       1 shared_informer.go:262] Caches are synced for resource quota
	I0723 14:39:14.968643       1 shared_informer.go:262] Caches are synced for resource quota
	I0723 14:39:14.978959       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0723 14:39:14.982537       1 shared_informer.go:262] Caches are synced for attach detach
	I0723 14:39:14.983335       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0723 14:39:15.284797       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q8724"
	I0723 14:39:15.380794       1 shared_informer.go:262] Caches are synced for garbage collector
	I0723 14:39:15.380803       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0723 14:39:15.386560       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-rllz6"
	I0723 14:39:15.389848       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-wzt69"
	I0723 14:39:15.390810       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [16b96129458d] <==
	I0723 14:39:16.432436       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0723 14:39:16.432466       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0723 14:39:16.432478       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0723 14:39:16.458585       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0723 14:39:16.458592       1 server_others.go:206] "Using iptables Proxier"
	I0723 14:39:16.458605       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0723 14:39:16.458707       1 server.go:661] "Version info" version="v1.24.1"
	I0723 14:39:16.458711       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:39:16.459793       1 config.go:317] "Starting service config controller"
	I0723 14:39:16.459798       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0723 14:39:16.459818       1 config.go:226] "Starting endpoint slice config controller"
	I0723 14:39:16.459820       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0723 14:39:16.461231       1 config.go:444] "Starting node config controller"
	I0723 14:39:16.461235       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0723 14:39:16.560398       1 shared_informer.go:262] Caches are synced for service config
	I0723 14:39:16.560397       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0723 14:39:16.561530       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [2e472c4af336] <==
	W0723 14:38:59.435413       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:38:59.435431       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0723 14:38:59.435469       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 14:38:59.435494       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 14:38:59.435520       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0723 14:38:59.435550       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0723 14:38:59.435577       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0723 14:38:59.435594       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0723 14:38:59.435686       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 14:38:59.435708       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 14:38:59.435732       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 14:38:59.435750       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0723 14:38:59.435785       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0723 14:38:59.435809       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0723 14:38:59.435859       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 14:38:59.435892       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 14:39:00.253958       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0723 14:39:00.253991       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0723 14:39:00.262945       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 14:39:00.262966       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 14:39:00.359499       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 14:39:00.359539       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0723 14:39:00.409793       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0723 14:39:00.409859       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0723 14:39:00.834940       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-07-23 14:33:42 UTC, ends at Tue 2024-07-23 14:43:19 UTC. --
	Jul 23 14:39:02 running-upgrade-350000 kubelet[14406]: I0723 14:39:02.969545   14406 apiserver.go:52] "Watching apiserver"
	Jul 23 14:39:03 running-upgrade-350000 kubelet[14406]: I0723 14:39:03.391386   14406 reconciler.go:157] "Reconciler: start to sync state"
	Jul 23 14:39:03 running-upgrade-350000 kubelet[14406]: E0723 14:39:03.568724   14406 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-350000\" already exists" pod="kube-system/etcd-running-upgrade-350000"
	Jul 23 14:39:03 running-upgrade-350000 kubelet[14406]: E0723 14:39:03.769180   14406 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-350000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-350000"
	Jul 23 14:39:03 running-upgrade-350000 kubelet[14406]: E0723 14:39:03.968283   14406 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-350000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-350000"
	Jul 23 14:39:04 running-upgrade-350000 kubelet[14406]: I0723 14:39:04.164411   14406 request.go:601] Waited for 1.133292432s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 23 14:39:04 running-upgrade-350000 kubelet[14406]: E0723 14:39:04.166768   14406 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-350000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-350000"
	Jul 23 14:39:14 running-upgrade-350000 kubelet[14406]: I0723 14:39:14.941179   14406 topology_manager.go:200] "Topology Admit Handler"
	Jul 23 14:39:14 running-upgrade-350000 kubelet[14406]: I0723 14:39:14.985002   14406 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 23 14:39:14 running-upgrade-350000 kubelet[14406]: I0723 14:39:14.985816   14406 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 23 14:39:15 running-upgrade-350000 kubelet[14406]: I0723 14:39:15.085494   14406 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/65d55b63-7a9f-4fdd-b9a7-5b329eec08e3-tmp\") pod \"storage-provisioner\" (UID: \"65d55b63-7a9f-4fdd-b9a7-5b329eec08e3\") " pod="kube-system/storage-provisioner"
	Jul 23 14:39:15 running-upgrade-350000 kubelet[14406]: I0723 14:39:15.085518   14406 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq4cj\" (UniqueName: \"kubernetes.io/projected/65d55b63-7a9f-4fdd-b9a7-5b329eec08e3-kube-api-access-xq4cj\") pod \"storage-provisioner\" (UID: \"65d55b63-7a9f-4fdd-b9a7-5b329eec08e3\") " pod="kube-system/storage-provisioner"
	Jul 23 14:39:15 running-upgrade-350000 kubelet[14406]: I0723 14:39:15.289282   14406 topology_manager.go:200] "Topology Admit Handler"
	Jul 23 14:39:15 running-upgrade-350000 kubelet[14406]: I0723 14:39:15.390415   14406 topology_manager.go:200] "Topology Admit Handler"
	Jul 23 14:39:15 running-upgrade-350000 kubelet[14406]: I0723 14:39:15.395507   14406 topology_manager.go:200] "Topology Admit Handler"
	Jul 23 14:39:15 running-upgrade-350000 kubelet[14406]: I0723 14:39:15.488488   14406 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7cc2c9c-54ac-4405-b2b1-1263eacd2d31-kube-proxy\") pod \"kube-proxy-q8724\" (UID: \"f7cc2c9c-54ac-4405-b2b1-1263eacd2d31\") " pod="kube-system/kube-proxy-q8724"
	Jul 23 14:39:15 running-upgrade-350000 kubelet[14406]: I0723 14:39:15.488515   14406 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7cc2c9c-54ac-4405-b2b1-1263eacd2d31-xtables-lock\") pod \"kube-proxy-q8724\" (UID: \"f7cc2c9c-54ac-4405-b2b1-1263eacd2d31\") " pod="kube-system/kube-proxy-q8724"
	Jul 23 14:39:15 running-upgrade-350000 kubelet[14406]: I0723 14:39:15.488526   14406 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f8dh\" (UniqueName: \"kubernetes.io/projected/f7cc2c9c-54ac-4405-b2b1-1263eacd2d31-kube-api-access-6f8dh\") pod \"kube-proxy-q8724\" (UID: \"f7cc2c9c-54ac-4405-b2b1-1263eacd2d31\") " pod="kube-system/kube-proxy-q8724"
	Jul 23 14:39:15 running-upgrade-350000 kubelet[14406]: I0723 14:39:15.488535   14406 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7cc2c9c-54ac-4405-b2b1-1263eacd2d31-lib-modules\") pod \"kube-proxy-q8724\" (UID: \"f7cc2c9c-54ac-4405-b2b1-1263eacd2d31\") " pod="kube-system/kube-proxy-q8724"
	Jul 23 14:39:15 running-upgrade-350000 kubelet[14406]: I0723 14:39:15.589126   14406 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7476d707-625b-4346-a9be-ffd4aa9fdc5e-config-volume\") pod \"coredns-6d4b75cb6d-rllz6\" (UID: \"7476d707-625b-4346-a9be-ffd4aa9fdc5e\") " pod="kube-system/coredns-6d4b75cb6d-rllz6"
	Jul 23 14:39:15 running-upgrade-350000 kubelet[14406]: I0723 14:39:15.589221   14406 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9frnf\" (UniqueName: \"kubernetes.io/projected/f99e7527-bfb1-49d9-a06c-ff1b0f97a58e-kube-api-access-9frnf\") pod \"coredns-6d4b75cb6d-wzt69\" (UID: \"f99e7527-bfb1-49d9-a06c-ff1b0f97a58e\") " pod="kube-system/coredns-6d4b75cb6d-wzt69"
	Jul 23 14:39:15 running-upgrade-350000 kubelet[14406]: I0723 14:39:15.589236   14406 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdwpq\" (UniqueName: \"kubernetes.io/projected/7476d707-625b-4346-a9be-ffd4aa9fdc5e-kube-api-access-gdwpq\") pod \"coredns-6d4b75cb6d-rllz6\" (UID: \"7476d707-625b-4346-a9be-ffd4aa9fdc5e\") " pod="kube-system/coredns-6d4b75cb6d-rllz6"
	Jul 23 14:39:15 running-upgrade-350000 kubelet[14406]: I0723 14:39:15.589262   14406 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f99e7527-bfb1-49d9-a06c-ff1b0f97a58e-config-volume\") pod \"coredns-6d4b75cb6d-wzt69\" (UID: \"f99e7527-bfb1-49d9-a06c-ff1b0f97a58e\") " pod="kube-system/coredns-6d4b75cb6d-wzt69"
	Jul 23 14:43:04 running-upgrade-350000 kubelet[14406]: I0723 14:43:04.494671   14406 scope.go:110] "RemoveContainer" containerID="34decae0ac072b2573a813fffcb2cdf8e53548de5900a457c46cfcf41c272fc4"
	Jul 23 14:43:04 running-upgrade-350000 kubelet[14406]: I0723 14:43:04.517934   14406 scope.go:110] "RemoveContainer" containerID="5b1d3e997c2c03d2bf3803c2a120d7db0e57848fda1a5b26e2696a0133c39ad0"
	
	
	==> storage-provisioner [8b728ae10aec] <==
	I0723 14:39:15.439713       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 14:39:15.443196       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 14:39:15.443450       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 14:39:15.446630       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 14:39:15.446955       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d9c4073c-aab8-4dbc-abc8-69432867333c", APIVersion:"v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-350000_0c5b2802-58e7-4217-b9e2-cb0d2c90973c became leader
	I0723 14:39:15.448357       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-350000_0c5b2802-58e7-4217-b9e2-cb0d2c90973c!
	I0723 14:39:15.548586       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-350000_0c5b2802-58e7-4217-b9e2-cb0d2c90973c!
	

-- /stdout --
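The dump above is the `minikube logs` output the test helper collects on failure: etcd, the apiserver, the controller-manager, kube-proxy, the scheduler and the kubelet all come up cleanly around 14:39, yet the status probe below reports the apiserver as Stopped at 14:43. To pull the same dump from a live profile, the invocation minikube itself suggests in its error box (with the `--file` flag) should work against this profile name:

	out/minikube-darwin-arm64 -p running-upgrade-350000 logs --file=logs.txt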
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-350000 -n running-upgrade-350000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-350000 -n running-upgrade-350000: exit status 2 (15.660933542s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-350000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-350000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-350000
--- FAIL: TestRunningBinaryUpgrade (628.47s)
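The deciding probe in this failure is `status --format={{.APIServer}}`: `--format` takes a Go template rendered against minikube's status struct, whose exported fields include Host, Kubelet and APIServer. A hypothetical one-liner reading several fields at once against this profile would be:

	out/minikube-darwin-arm64 status -p running-upgrade-350000 --format='{{.Host}}:{{.Kubelet}}:{{.APIServer}}'

Here the harness only needed APIServer, saw "Stopped" after a ~15s probe, and moved on to cleanup.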

TestKubernetesUpgrade (18.74s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-289000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-289000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.034387875s)

-- stdout --
	* [kubernetes-upgrade-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-289000" primary control-plane node in "kubernetes-upgrade-289000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-289000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:32:47.964340    4989 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:32:47.964471    4989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:32:47.964477    4989 out.go:304] Setting ErrFile to fd 2...
	I0723 07:32:47.964479    4989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:32:47.964599    4989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:32:47.965682    4989 out.go:298] Setting JSON to false
	I0723 07:32:47.981291    4989 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3731,"bootTime":1721741436,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:32:47.981413    4989 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:32:47.986367    4989 out.go:177] * [kubernetes-upgrade-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:32:47.993191    4989 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:32:47.993282    4989 notify.go:220] Checking for updates...
	I0723 07:32:48.001289    4989 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:32:48.004295    4989 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:32:48.007261    4989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:32:48.010301    4989 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:32:48.011712    4989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:32:48.015700    4989 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:32:48.015761    4989 config.go:182] Loaded profile config "offline-docker-252000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:32:48.015807    4989 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:32:48.020324    4989 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:32:48.025296    4989 start.go:297] selected driver: qemu2
	I0723 07:32:48.025300    4989 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:32:48.025305    4989 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:32:48.027430    4989 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:32:48.030309    4989 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:32:48.033366    4989 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 07:32:48.033379    4989 cni.go:84] Creating CNI manager for ""
	I0723 07:32:48.033385    4989 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0723 07:32:48.033413    4989 start.go:340] cluster config:
	{Name:kubernetes-upgrade-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:32:48.037029    4989 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:32:48.044288    4989 out.go:177] * Starting "kubernetes-upgrade-289000" primary control-plane node in "kubernetes-upgrade-289000" cluster
	I0723 07:32:48.048251    4989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0723 07:32:48.048273    4989 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0723 07:32:48.048283    4989 cache.go:56] Caching tarball of preloaded images
	I0723 07:32:48.048336    4989 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:32:48.048341    4989 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0723 07:32:48.048401    4989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/kubernetes-upgrade-289000/config.json ...
	I0723 07:32:48.048412    4989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/kubernetes-upgrade-289000/config.json: {Name:mkc1b0e28bb41dab30eb6b40f836490772ea586c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:32:48.048765    4989 start.go:360] acquireMachinesLock for kubernetes-upgrade-289000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:32:48.195395    4989 start.go:364] duration metric: took 146.612583ms to acquireMachinesLock for "kubernetes-upgrade-289000"
	I0723 07:32:48.195520    4989 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:32:48.195710    4989 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:32:48.204090    4989 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:32:48.251105    4989 start.go:159] libmachine.API.Create for "kubernetes-upgrade-289000" (driver="qemu2")
	I0723 07:32:48.251173    4989 client.go:168] LocalClient.Create starting
	I0723 07:32:48.251283    4989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:32:48.251342    4989 main.go:141] libmachine: Decoding PEM data...
	I0723 07:32:48.251360    4989 main.go:141] libmachine: Parsing certificate...
	I0723 07:32:48.251428    4989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:32:48.251470    4989 main.go:141] libmachine: Decoding PEM data...
	I0723 07:32:48.251488    4989 main.go:141] libmachine: Parsing certificate...
	I0723 07:32:48.252261    4989 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:32:48.452881    4989 main.go:141] libmachine: Creating SSH key...
	I0723 07:32:48.488238    4989 main.go:141] libmachine: Creating Disk image...
	I0723 07:32:48.488243    4989 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:32:48.488423    4989 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2
	I0723 07:32:48.497852    4989 main.go:141] libmachine: STDOUT: 
	I0723 07:32:48.497869    4989 main.go:141] libmachine: STDERR: 
	I0723 07:32:48.497912    4989 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2 +20000M
	I0723 07:32:48.505714    4989 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:32:48.505729    4989 main.go:141] libmachine: STDERR: 
	I0723 07:32:48.505742    4989 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2
	I0723 07:32:48.505748    4989 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:32:48.505756    4989 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:32:48.505779    4989 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:b5:2d:97:0d:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2
	I0723 07:32:48.507409    4989 main.go:141] libmachine: STDOUT: 
	I0723 07:32:48.507422    4989 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:32:48.507444    4989 client.go:171] duration metric: took 256.268958ms to LocalClient.Create
	I0723 07:32:50.509584    4989 start.go:128] duration metric: took 2.313876209s to createHost
	I0723 07:32:50.509632    4989 start.go:83] releasing machines lock for "kubernetes-upgrade-289000", held for 2.31423575s
	W0723 07:32:50.509710    4989 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:32:50.516997    4989 out.go:177] * Deleting "kubernetes-upgrade-289000" in qemu2 ...
	W0723 07:32:50.549294    4989 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:32:50.549327    4989 start.go:729] Will try again in 5 seconds ...
	I0723 07:32:55.551504    4989 start.go:360] acquireMachinesLock for kubernetes-upgrade-289000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:32:55.605659    4989 start.go:364] duration metric: took 54.040333ms to acquireMachinesLock for "kubernetes-upgrade-289000"
	I0723 07:32:55.605862    4989 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:32:55.606064    4989 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:32:55.617371    4989 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:32:55.667242    4989 start.go:159] libmachine.API.Create for "kubernetes-upgrade-289000" (driver="qemu2")
	I0723 07:32:55.667294    4989 client.go:168] LocalClient.Create starting
	I0723 07:32:55.667418    4989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:32:55.667484    4989 main.go:141] libmachine: Decoding PEM data...
	I0723 07:32:55.667506    4989 main.go:141] libmachine: Parsing certificate...
	I0723 07:32:55.667560    4989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:32:55.667590    4989 main.go:141] libmachine: Decoding PEM data...
	I0723 07:32:55.667605    4989 main.go:141] libmachine: Parsing certificate...
	I0723 07:32:55.668101    4989 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:32:55.869088    4989 main.go:141] libmachine: Creating SSH key...
	I0723 07:32:55.917855    4989 main.go:141] libmachine: Creating Disk image...
	I0723 07:32:55.917861    4989 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:32:55.918031    4989 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2
	I0723 07:32:55.927276    4989 main.go:141] libmachine: STDOUT: 
	I0723 07:32:55.927366    4989 main.go:141] libmachine: STDERR: 
	I0723 07:32:55.927419    4989 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2 +20000M
	I0723 07:32:55.935225    4989 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:32:55.935245    4989 main.go:141] libmachine: STDERR: 
	I0723 07:32:55.935257    4989 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2
	I0723 07:32:55.935261    4989 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:32:55.935269    4989 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:32:55.935301    4989 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:b6:ca:05:00:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2
	I0723 07:32:55.936852    4989 main.go:141] libmachine: STDOUT: 
	I0723 07:32:55.936869    4989 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:32:55.936882    4989 client.go:171] duration metric: took 269.58825ms to LocalClient.Create
	I0723 07:32:57.938957    4989 start.go:128] duration metric: took 2.332876792s to createHost
	I0723 07:32:57.938991    4989 start.go:83] releasing machines lock for "kubernetes-upgrade-289000", held for 2.333348208s
	W0723 07:32:57.939144    4989 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:32:57.946472    4989 out.go:177] 
	W0723 07:32:57.950457    4989 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:32:57.950471    4989 out.go:239] * 
	* 
	W0723 07:32:57.951543    4989 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:32:57.964438    4989 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-289000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
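Both start attempts in this test die at the same step: libmachine execs QEMU through socket_vmnet_client, and the client cannot reach the daemon's unix socket (`Failed to connect to "/var/run/socket_vmnet": Connection refused`), so no VM ever boots. A minimal spot-check on the build host, assuming the paths shown in the log (`/opt/socket_vmnet/bin/socket_vmnet_client`, `/var/run/socket_vmnet`) and using only standard tools, might look like:

	# Is the daemon up, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# Exercise the client the same way libmachine does; `true` is a hypothetical
	# stand-in for the real qemu-system-aarch64 command line:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

"Connection refused" with the socket present would point at a dead or wedged daemon holding a stale socket; a missing socket would point at socket_vmnet never having been started on this agent.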
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-289000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-289000: (3.296053209s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-289000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-289000 status --format={{.Host}}: exit status 7 (31.733625ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-289000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-289000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.21196975s)

-- stdout --
	* [kubernetes-upgrade-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-289000" primary control-plane node in "kubernetes-upgrade-289000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-289000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-289000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:33:01.333066    5037 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:33:01.333198    5037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:33:01.333203    5037 out.go:304] Setting ErrFile to fd 2...
	I0723 07:33:01.333205    5037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:33:01.333338    5037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:33:01.334382    5037 out.go:298] Setting JSON to false
	I0723 07:33:01.351572    5037 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3745,"bootTime":1721741436,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:33:01.351633    5037 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:33:01.356010    5037 out.go:177] * [kubernetes-upgrade-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:33:01.363052    5037 notify.go:220] Checking for updates...
	I0723 07:33:01.366939    5037 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:33:01.373940    5037 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:33:01.382943    5037 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:33:01.388874    5037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:33:01.396956    5037 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:33:01.404938    5037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:33:01.409245    5037 config.go:182] Loaded profile config "kubernetes-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0723 07:33:01.409518    5037 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:33:01.411894    5037 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:33:01.418758    5037 start.go:297] selected driver: qemu2
	I0723 07:33:01.418763    5037 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:33:01.418831    5037 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:33:01.421275    5037 cni.go:84] Creating CNI manager for ""
	I0723 07:33:01.421292    5037 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:33:01.421319    5037 start.go:340] cluster config:
	{Name:kubernetes-upgrade-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:33:01.424852    5037 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:33:01.432948    5037 out.go:177] * Starting "kubernetes-upgrade-289000" primary control-plane node in "kubernetes-upgrade-289000" cluster
	I0723 07:33:01.436932    5037 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0723 07:33:01.436950    5037 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0723 07:33:01.436957    5037 cache.go:56] Caching tarball of preloaded images
	I0723 07:33:01.437015    5037 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:33:01.437021    5037 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0723 07:33:01.437079    5037 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/kubernetes-upgrade-289000/config.json ...
	I0723 07:33:01.437355    5037 start.go:360] acquireMachinesLock for kubernetes-upgrade-289000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:33:01.437393    5037 start.go:364] duration metric: took 31.792µs to acquireMachinesLock for "kubernetes-upgrade-289000"
	I0723 07:33:01.437402    5037 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:33:01.437408    5037 fix.go:54] fixHost starting: 
	I0723 07:33:01.437523    5037 fix.go:112] recreateIfNeeded on kubernetes-upgrade-289000: state=Stopped err=<nil>
	W0723 07:33:01.437533    5037 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:33:01.440943    5037 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-289000" ...
	I0723 07:33:01.448953    5037 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:33:01.449010    5037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:b6:ca:05:00:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2
	I0723 07:33:01.451116    5037 main.go:141] libmachine: STDOUT: 
	I0723 07:33:01.451138    5037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:33:01.451166    5037 fix.go:56] duration metric: took 13.757667ms for fixHost
	I0723 07:33:01.451171    5037 start.go:83] releasing machines lock for "kubernetes-upgrade-289000", held for 13.774583ms
	W0723 07:33:01.451177    5037 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:33:01.451211    5037 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:33:01.451215    5037 start.go:729] Will try again in 5 seconds ...
	I0723 07:33:06.451794    5037 start.go:360] acquireMachinesLock for kubernetes-upgrade-289000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:33:06.452442    5037 start.go:364] duration metric: took 495.625µs to acquireMachinesLock for "kubernetes-upgrade-289000"
	I0723 07:33:06.452586    5037 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:33:06.452608    5037 fix.go:54] fixHost starting: 
	I0723 07:33:06.453360    5037 fix.go:112] recreateIfNeeded on kubernetes-upgrade-289000: state=Stopped err=<nil>
	W0723 07:33:06.453389    5037 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:33:06.458570    5037 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-289000" ...
	I0723 07:33:06.467553    5037 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:33:06.467800    5037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:b6:ca:05:00:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubernetes-upgrade-289000/disk.qcow2
	I0723 07:33:06.478210    5037 main.go:141] libmachine: STDOUT: 
	I0723 07:33:06.478283    5037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:33:06.478376    5037 fix.go:56] duration metric: took 25.771875ms for fixHost
	I0723 07:33:06.478400    5037 start.go:83] releasing machines lock for "kubernetes-upgrade-289000", held for 25.930459ms
	W0723 07:33:06.478627    5037 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-289000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-289000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:33:06.487466    5037 out.go:177] 
	W0723 07:33:06.491556    5037 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:33:06.491594    5037 out.go:239] * 
	* 
	W0723 07:33:06.493616    5037 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:33:06.501529    5037 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-289000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-289000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-289000 version --output=json: exit status 1 (60.103791ms)

** stderr ** 
	error: context "kubernetes-upgrade-289000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-23 07:33:06.575431 -0700 PDT m=+2249.221412251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-289000 -n kubernetes-upgrade-289000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-289000 -n kubernetes-upgrade-289000: exit status 7 (33.095667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-289000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-289000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-289000
--- FAIL: TestKubernetesUpgrade (18.74s)
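
Note: every start attempt in this test failed with the same driver error, Failed to connect to "/var/run/socket_vmnet": Connection refused, which points at the socket_vmnet daemon on the host not listening rather than at the profile under test. The Go sketch below is not part of the test suite; it only probes the socket path reported in the log (the file name, package layout, and 2-second timeout are assumptions for illustration) so a dead daemon can be spotted before an entire upgrade run is spent on it.

// probe_socket_vmnet.go: a minimal, hypothetical reachability check for the
// socket_vmnet daemon. Not minikube code; the socket path is taken from the
// failures above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socketPath = "/var/run/socket_vmnet" // path reported in the failing runs

	// Dialing a unix socket fails with "connection refused" when the socket
	// file exists but nothing is listening -- the condition the tests above hit.
	// Depending on how the daemon was installed, this may require root.
	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

The probe only confirms the listener is gone; it does not restart it. On a Homebrew install the daemon can usually be brought back with "sudo brew services start socket_vmnet" before re-running the suite.
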
TestStoppedBinaryUpgrade/Upgrade (584.66s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3593293936 start -p stopped-upgrade-462000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3593293936 start -p stopped-upgrade-462000 --memory=2200 --vm-driver=qemu2 : (51.96700925s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3593293936 -p stopped-upgrade-462000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3593293936 -p stopped-upgrade-462000 stop: (12.095021166s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-462000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-462000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.516087709s)

-- stdout --
	* [stopped-upgrade-462000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-462000" primary control-plane node in "stopped-upgrade-462000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-462000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner

-- /stdout --
** stderr ** 
	I0723 07:34:00.937865    5088 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:34:00.938013    5088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:34:00.938017    5088 out.go:304] Setting ErrFile to fd 2...
	I0723 07:34:00.938019    5088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:34:00.938166    5088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:34:00.939213    5088 out.go:298] Setting JSON to false
	I0723 07:34:00.955924    5088 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3804,"bootTime":1721741436,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:34:00.956013    5088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:34:00.959913    5088 out.go:177] * [stopped-upgrade-462000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:34:00.966977    5088 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:34:00.967095    5088 notify.go:220] Checking for updates...
	I0723 07:34:00.972891    5088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:34:00.975933    5088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:34:00.978971    5088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:34:00.981914    5088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:34:00.984939    5088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:34:00.988167    5088 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0723 07:34:00.990864    5088 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0723 07:34:00.993952    5088 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:34:00.997948    5088 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:34:01.004957    5088 start.go:297] selected driver: qemu2
	I0723 07:34:01.004962    5088 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50269 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0723 07:34:01.005011    5088 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:34:01.007252    5088 cni.go:84] Creating CNI manager for ""
	I0723 07:34:01.007269    5088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:34:01.007305    5088 start.go:340] cluster config:
	{Name:stopped-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50269 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0723 07:34:01.007357    5088 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:34:01.015924    5088 out.go:177] * Starting "stopped-upgrade-462000" primary control-plane node in "stopped-upgrade-462000" cluster
	I0723 07:34:01.018872    5088 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0723 07:34:01.018889    5088 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0723 07:34:01.018896    5088 cache.go:56] Caching tarball of preloaded images
	I0723 07:34:01.018942    5088 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:34:01.018947    5088 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0723 07:34:01.018993    5088 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/config.json ...
	I0723 07:34:01.019317    5088 start.go:360] acquireMachinesLock for stopped-upgrade-462000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:34:01.019347    5088 start.go:364] duration metric: took 23.625µs to acquireMachinesLock for "stopped-upgrade-462000"
	I0723 07:34:01.019356    5088 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:34:01.019362    5088 fix.go:54] fixHost starting: 
	I0723 07:34:01.019462    5088 fix.go:112] recreateIfNeeded on stopped-upgrade-462000: state=Stopped err=<nil>
	W0723 07:34:01.019469    5088 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:34:01.027755    5088 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-462000" ...
	I0723 07:34:01.031899    5088 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:34:01.031958    5088 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/stopped-upgrade-462000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/stopped-upgrade-462000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/stopped-upgrade-462000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50234-:22,hostfwd=tcp::50235-:2376,hostname=stopped-upgrade-462000 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/stopped-upgrade-462000/disk.qcow2
	I0723 07:34:01.068653    5088 main.go:141] libmachine: STDOUT: 
	I0723 07:34:01.068680    5088 main.go:141] libmachine: STDERR: 
	I0723 07:34:01.068685    5088 main.go:141] libmachine: Waiting for VM to start (ssh -p 50234 docker@127.0.0.1)...
	I0723 07:34:21.409017    5088 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/config.json ...
	I0723 07:34:21.409227    5088 machine.go:94] provisionDockerMachine start ...
	I0723 07:34:21.409281    5088 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:21.409417    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102606a10] 0x102609270 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0723 07:34:21.409421    5088 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 07:34:21.474510    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 07:34:21.474530    5088 buildroot.go:166] provisioning hostname "stopped-upgrade-462000"
	I0723 07:34:21.474594    5088 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:21.474719    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102606a10] 0x102609270 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0723 07:34:21.474726    5088 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-462000 && echo "stopped-upgrade-462000" | sudo tee /etc/hostname
	I0723 07:34:21.542582    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-462000
	
	I0723 07:34:21.542645    5088 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:21.542778    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102606a10] 0x102609270 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0723 07:34:21.542788    5088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-462000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-462000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-462000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 07:34:21.614087    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 07:34:21.614100    5088 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19319-1567/.minikube CaCertPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19319-1567/.minikube}
	I0723 07:34:21.614109    5088 buildroot.go:174] setting up certificates
	I0723 07:34:21.614113    5088 provision.go:84] configureAuth start
	I0723 07:34:21.614121    5088 provision.go:143] copyHostCerts
	I0723 07:34:21.614197    5088 exec_runner.go:144] found /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.pem, removing ...
	I0723 07:34:21.614203    5088 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.pem
	I0723 07:34:21.614294    5088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.pem (1078 bytes)
	I0723 07:34:21.614475    5088 exec_runner.go:144] found /Users/jenkins/minikube-integration/19319-1567/.minikube/cert.pem, removing ...
	I0723 07:34:21.614481    5088 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19319-1567/.minikube/cert.pem
	I0723 07:34:21.614528    5088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19319-1567/.minikube/cert.pem (1123 bytes)
	I0723 07:34:21.614633    5088 exec_runner.go:144] found /Users/jenkins/minikube-integration/19319-1567/.minikube/key.pem, removing ...
	I0723 07:34:21.614636    5088 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19319-1567/.minikube/key.pem
	I0723 07:34:21.614678    5088 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19319-1567/.minikube/key.pem (1679 bytes)
	I0723 07:34:21.614762    5088 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-462000 san=[127.0.0.1 localhost minikube stopped-upgrade-462000]
	I0723 07:34:21.762835    5088 provision.go:177] copyRemoteCerts
	I0723 07:34:21.762887    5088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 07:34:21.762895    5088 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0723 07:34:21.803438    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 07:34:21.811263    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0723 07:34:21.818516    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 07:34:21.826194    5088 provision.go:87] duration metric: took 212.077458ms to configureAuth
	I0723 07:34:21.826218    5088 buildroot.go:189] setting minikube options for container-runtime
	I0723 07:34:21.826354    5088 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0723 07:34:21.826393    5088 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:21.826487    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102606a10] 0x102609270 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0723 07:34:21.826493    5088 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0723 07:34:21.892059    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0723 07:34:21.892074    5088 buildroot.go:70] root file system type: tmpfs
	I0723 07:34:21.892129    5088 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0723 07:34:21.892188    5088 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:21.892311    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102606a10] 0x102609270 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0723 07:34:21.892346    5088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0723 07:34:21.962317    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0723 07:34:21.962378    5088 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:21.962492    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102606a10] 0x102609270 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0723 07:34:21.962501    5088 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0723 07:34:22.301568    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0723 07:34:22.301581    5088 machine.go:97] duration metric: took 892.364667ms to provisionDockerMachine
	I0723 07:34:22.301588    5088 start.go:293] postStartSetup for "stopped-upgrade-462000" (driver="qemu2")
	I0723 07:34:22.301595    5088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 07:34:22.301648    5088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 07:34:22.301659    5088 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0723 07:34:22.336547    5088 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 07:34:22.338268    5088 info.go:137] Remote host: Buildroot 2021.02.12
	I0723 07:34:22.338279    5088 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19319-1567/.minikube/addons for local assets ...
	I0723 07:34:22.338373    5088 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19319-1567/.minikube/files for local assets ...
	I0723 07:34:22.338502    5088 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/ssl/certs/20652.pem -> 20652.pem in /etc/ssl/certs
	I0723 07:34:22.338633    5088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 07:34:22.341925    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/ssl/certs/20652.pem --> /etc/ssl/certs/20652.pem (1708 bytes)
	I0723 07:34:22.349731    5088 start.go:296] duration metric: took 48.135458ms for postStartSetup
	I0723 07:34:22.349752    5088 fix.go:56] duration metric: took 21.330776167s for fixHost
	I0723 07:34:22.349800    5088 main.go:141] libmachine: Using SSH client type: native
	I0723 07:34:22.349920    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102606a10] 0x102609270 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0723 07:34:22.349925    5088 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0723 07:34:22.415035    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721745262.613066338
	
	I0723 07:34:22.415045    5088 fix.go:216] guest clock: 1721745262.613066338
	I0723 07:34:22.415050    5088 fix.go:229] Guest: 2024-07-23 07:34:22.613066338 -0700 PDT Remote: 2024-07-23 07:34:22.349754 -0700 PDT m=+21.433355210 (delta=263.312338ms)
	I0723 07:34:22.415061    5088 fix.go:200] guest clock delta is within tolerance: 263.312338ms
	I0723 07:34:22.415064    5088 start.go:83] releasing machines lock for "stopped-upgrade-462000", held for 21.396099417s
	I0723 07:34:22.415149    5088 ssh_runner.go:195] Run: cat /version.json
	I0723 07:34:22.415159    5088 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0723 07:34:22.415195    5088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 07:34:22.415239    5088 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	W0723 07:34:22.415971    5088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50421->127.0.0.1:50234: read: connection reset by peer
	I0723 07:34:22.415987    5088 retry.go:31] will retry after 313.239899ms: ssh: handshake failed: read tcp 127.0.0.1:50421->127.0.0.1:50234: read: connection reset by peer
	W0723 07:34:22.451341    5088 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0723 07:34:22.451415    5088 ssh_runner.go:195] Run: systemctl --version
	I0723 07:34:22.453583    5088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 07:34:22.455469    5088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 07:34:22.455504    5088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0723 07:34:22.458494    5088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0723 07:34:22.463983    5088 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 07:34:22.464000    5088 start.go:495] detecting cgroup driver to use...
	I0723 07:34:22.464098    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 07:34:22.471726    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0723 07:34:22.475276    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0723 07:34:22.478624    5088 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0723 07:34:22.478683    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0723 07:34:22.482510    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0723 07:34:22.486368    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0723 07:34:22.490071    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0723 07:34:22.494123    5088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 07:34:22.497914    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0723 07:34:22.501808    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0723 07:34:22.505528    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0723 07:34:22.509070    5088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 07:34:22.512139    5088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 07:34:22.515576    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:34:22.596128    5088 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0723 07:34:22.602166    5088 start.go:495] detecting cgroup driver to use...
	I0723 07:34:22.602352    5088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0723 07:34:22.610645    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 07:34:22.616021    5088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 07:34:22.627924    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 07:34:22.632989    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0723 07:34:22.637777    5088 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0723 07:34:22.665427    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0723 07:34:22.670230    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 07:34:22.675781    5088 ssh_runner.go:195] Run: which cri-dockerd
	I0723 07:34:22.676896    5088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0723 07:34:22.679709    5088 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0723 07:34:22.684670    5088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0723 07:34:22.750418    5088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0723 07:34:22.832591    5088 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0723 07:34:22.832649    5088 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0723 07:34:22.839184    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:34:22.925576    5088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0723 07:34:24.048121    5088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.122547708s)
	I0723 07:34:24.048177    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0723 07:34:24.055355    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0723 07:34:24.059802    5088 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0723 07:34:24.122976    5088 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0723 07:34:24.205669    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:34:24.283997    5088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0723 07:34:24.289898    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0723 07:34:24.294188    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:34:24.359766    5088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0723 07:34:24.398142    5088 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0723 07:34:24.398232    5088 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0723 07:34:24.400193    5088 start.go:563] Will wait 60s for crictl version
	I0723 07:34:24.400250    5088 ssh_runner.go:195] Run: which crictl
	I0723 07:34:24.401583    5088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 07:34:24.416608    5088 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0723 07:34:24.416679    5088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0723 07:34:24.433455    5088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0723 07:34:24.460374    5088 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0723 07:34:24.460449    5088 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0723 07:34:24.461987    5088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 07:34:24.466067    5088 kubeadm.go:883] updating cluster {Name:stopped-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50269 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0723 07:34:24.466119    5088 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0723 07:34:24.466169    5088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0723 07:34:24.478137    5088 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0723 07:34:24.478157    5088 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0723 07:34:24.478206    5088 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0723 07:34:24.482127    5088 ssh_runner.go:195] Run: which lz4
	I0723 07:34:24.484030    5088 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0723 07:34:24.485585    5088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 07:34:24.485617    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0723 07:34:25.386549    5088 docker.go:649] duration metric: took 902.586041ms to copy over tarball
	I0723 07:34:25.386620    5088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 07:34:26.574088    5088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.1874745s)
	I0723 07:34:26.574107    5088 ssh_runner.go:146] rm: /preloaded.tar.lz4
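
The preload is shipped as an lz4-compressed tarball and unpacked on the guest with `tar -I lz4 -C /var -xf`, as logged above. For reference, a self-contained Go sketch that reads such a .tar.lz4 and lists its entries (it assumes the third-party github.com/pierrec/lz4/v4 decoder; minikube itself simply shells out to tar):

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"log"
	"os"

	"github.com/pierrec/lz4/v4"
)

func main() {
	f, err := os.Open("/preloaded.tar.lz4") // path taken from the log above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// lz4.NewReader decompresses the stream; tar.NewReader walks the entries.
	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s (%d bytes)\n", hdr.Name, hdr.Size)
	}
}
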
	I0723 07:34:26.591370    5088 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0723 07:34:26.594646    5088 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0723 07:34:26.600276    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:34:26.666956    5088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0723 07:34:28.102289    5088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.43534075s)
	I0723 07:34:28.102384    5088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0723 07:34:28.115370    5088 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0723 07:34:28.115379    5088 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0723 07:34:28.115384    5088 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 07:34:28.121182    5088 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:34:28.123024    5088 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0723 07:34:28.124942    5088 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0723 07:34:28.125118    5088 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:34:28.126925    5088 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0723 07:34:28.127089    5088 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0723 07:34:28.128277    5088 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0723 07:34:28.128564    5088 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0723 07:34:28.130151    5088 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0723 07:34:28.130313    5088 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0723 07:34:28.131335    5088 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0723 07:34:28.132789    5088 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0723 07:34:28.132909    5088 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0723 07:34:28.133882    5088 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0723 07:34:28.133879    5088 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0723 07:34:28.134673    5088 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
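
The image.go lines above show the lookup order: try the local Docker daemon first, and when that returns "No such image", fall back to pulling the reference from its registry. minikube uses go-containerregistry for this; a stripped-down sketch of the same fallback (error handling simplified, not the exact image.go code):

package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
)

// retrieve tries the local daemon first, then falls back to the registry.
func retrieve(refStr string) (v1.Image, error) {
	ref, err := name.ParseReference(refStr)
	if err != nil {
		return nil, err
	}
	if img, err := daemon.Image(ref); err == nil {
		return img, nil // found in the local Docker daemon
	}
	return crane.Pull(refStr) // daemon lookup failed: pull remotely
}

func main() {
	img, err := retrieve("registry.k8s.io/pause:3.7")
	if err != nil {
		log.Fatal(err)
	}
	d, _ := img.Digest()
	fmt.Println("digest:", d)
}
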
	I0723 07:34:28.614993    5088 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0723 07:34:28.621081    5088 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0723 07:34:28.622504    5088 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0723 07:34:28.634594    5088 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0723 07:34:28.634624    5088 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0723 07:34:28.634684    5088 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0723 07:34:28.636697    5088 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0723 07:34:28.638605    5088 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0723 07:34:28.638623    5088 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0723 07:34:28.638661    5088 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0723 07:34:28.652029    5088 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0723 07:34:28.652051    5088 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0723 07:34:28.652112    5088 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0723 07:34:28.659380    5088 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0723 07:34:28.659435    5088 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0723 07:34:28.659501    5088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0723 07:34:28.663755    5088 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0723 07:34:28.663776    5088 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0723 07:34:28.663830    5088 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0723 07:34:28.663833    5088 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	W0723 07:34:28.666403    5088 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0723 07:34:28.666513    5088 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0723 07:34:28.680259    5088 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0723 07:34:28.680293    5088 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0723 07:34:28.680315    5088 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0723 07:34:28.680292    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0723 07:34:28.680362    5088 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0723 07:34:28.680370    5088 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0723 07:34:28.689394    5088 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0723 07:34:28.697304    5088 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0723 07:34:28.697329    5088 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0723 07:34:28.697338    5088 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0723 07:34:28.697385    5088 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0723 07:34:28.697446    5088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0723 07:34:28.697582    5088 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0723 07:34:28.739915    5088 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0723 07:34:28.740040    5088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0723 07:34:28.740063    5088 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0723 07:34:28.740077    5088 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0723 07:34:28.740096    5088 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0723 07:34:28.740130    5088 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0723 07:34:28.740125    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0723 07:34:28.746106    5088 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0723 07:34:28.746196    5088 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:34:28.767137    5088 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0723 07:34:28.767158    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0723 07:34:28.789256    5088 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0723 07:34:28.789289    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0723 07:34:28.789331    5088 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0723 07:34:28.789339    5088 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0723 07:34:28.789348    5088 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:34:28.789390    5088 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:34:28.849609    5088 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0723 07:34:28.849609    5088 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0723 07:34:28.849744    5088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0723 07:34:28.875776    5088 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0723 07:34:28.875814    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0723 07:34:28.899567    5088 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0723 07:34:28.899582    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0723 07:34:29.060405    5088 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0723 07:34:29.060452    5088 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0723 07:34:29.060465    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0723 07:34:29.332395    5088 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0723 07:34:29.332417    5088 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0723 07:34:29.332431    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0723 07:34:29.464571    5088 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
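
Each image that did make it is loaded by streaming the cached file into `docker load` (the `sudo cat ... | docker load` invocations above). A minimal local equivalent that skips the cat and wires the file straight to docker's stdin; it assumes direct access to the Docker socket rather than minikube's sudo-over-SSH path:

package main

import (
	"log"
	"os"
	"os/exec"
)

func dockerLoad(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // stream the image tarball directly, no `cat` needed
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Path matching the /var/lib/minikube/images layout seen in the log.
	if err := dockerLoad("/var/lib/minikube/images/etcd_3.5.3-0"); err != nil {
		log.Fatal(err)
	}
}
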
	I0723 07:34:29.464608    5088 cache_images.go:92] duration metric: took 1.349240916s to LoadCachedImages
	W0723 07:34:29.464648    5088 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0723 07:34:29.464655    5088 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0723 07:34:29.464705    5088 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-462000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 07:34:29.464782    5088 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0723 07:34:29.478523    5088 cni.go:84] Creating CNI manager for ""
	I0723 07:34:29.478535    5088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:34:29.478540    5088 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 07:34:29.478548    5088 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-462000 NodeName:stopped-upgrade-462000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 07:34:29.478615    5088 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-462000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 07:34:29.478670    5088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0723 07:34:29.481818    5088 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 07:34:29.481850    5088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 07:34:29.485015    5088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0723 07:34:29.490126    5088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 07:34:29.495418    5088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
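
The 10-kubeadm.conf drop-in, kubelet.service unit, and kubeadm.yaml.new above are generated in memory and copied over SSH ("scp memory"). minikube renders them from Go templates over the options struct logged at kubeadm.go:181; a toy sketch of that rendering, where the template text and field names are invented for illustration:

package main

import (
	"os"
	"text/template"
)

// A toy version of rendering a kubeadm config from an options struct.
// The real template in minikube is far larger; this shape is assumed.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type opts struct {
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, opts{
		KubernetesVersion: "v1.24.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
}
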
	I0723 07:34:29.500922    5088 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0723 07:34:29.502338    5088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 07:34:29.506074    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:34:29.591892    5088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 07:34:29.601607    5088 certs.go:68] Setting up /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000 for IP: 10.0.2.15
	I0723 07:34:29.601620    5088 certs.go:194] generating shared ca certs ...
	I0723 07:34:29.601629    5088 certs.go:226] acquiring lock for ca certs: {Name:mk3c99e95d37931a4e7b239d14c48fdfa53d0dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:34:29.601793    5088 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.key
	I0723 07:34:29.601842    5088 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/proxy-client-ca.key
	I0723 07:34:29.601850    5088 certs.go:256] generating profile certs ...
	I0723 07:34:29.601925    5088 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/client.key
	I0723 07:34:29.601942    5088 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/apiserver.key.e7e0aef3
	I0723 07:34:29.601952    5088 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/apiserver.crt.e7e0aef3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0723 07:34:29.677083    5088 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/apiserver.crt.e7e0aef3 ...
	I0723 07:34:29.677098    5088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/apiserver.crt.e7e0aef3: {Name:mkb507290e49f921ad3d78312880e10b91c3999d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:34:29.677354    5088 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/apiserver.key.e7e0aef3 ...
	I0723 07:34:29.677360    5088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/apiserver.key.e7e0aef3: {Name:mk74e990d7b5de4ebb50eec4cc4c94a78817f507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:34:29.677507    5088 certs.go:381] copying /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/apiserver.crt.e7e0aef3 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/apiserver.crt
	I0723 07:34:29.677647    5088 certs.go:385] copying /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/apiserver.key.e7e0aef3 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/apiserver.key
	I0723 07:34:29.677805    5088 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/proxy-client.key
	I0723 07:34:29.677936    5088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/2065.pem (1338 bytes)
	W0723 07:34:29.677966    5088 certs.go:480] ignoring /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/2065_empty.pem, impossibly tiny 0 bytes
	I0723 07:34:29.677971    5088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 07:34:29.677995    5088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem (1078 bytes)
	I0723 07:34:29.678013    5088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem (1123 bytes)
	I0723 07:34:29.678032    5088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/key.pem (1679 bytes)
	I0723 07:34:29.678073    5088 certs.go:484] found cert: /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/ssl/certs/20652.pem (1708 bytes)
	I0723 07:34:29.678426    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 07:34:29.686604    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0723 07:34:29.694172    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 07:34:29.701877    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0723 07:34:29.709758    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0723 07:34:29.717758    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 07:34:29.725487    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 07:34:29.733685    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 07:34:29.742029    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 07:34:29.750171    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/2065.pem --> /usr/share/ca-certificates/2065.pem (1338 bytes)
	I0723 07:34:29.758429    5088 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/ssl/certs/20652.pem --> /usr/share/ca-certificates/20652.pem (1708 bytes)
	I0723 07:34:29.768283    5088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 07:34:29.775120    5088 ssh_runner.go:195] Run: openssl version
	I0723 07:34:29.777596    5088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 07:34:29.781599    5088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 07:34:29.783876    5088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I0723 07:34:29.783938    5088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 07:34:29.786150    5088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 07:34:29.790056    5088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2065.pem && ln -fs /usr/share/ca-certificates/2065.pem /etc/ssl/certs/2065.pem"
	I0723 07:34:29.794048    5088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2065.pem
	I0723 07:34:29.796188    5088 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:03 /usr/share/ca-certificates/2065.pem
	I0723 07:34:29.796231    5088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2065.pem
	I0723 07:34:29.798491    5088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2065.pem /etc/ssl/certs/51391683.0"
	I0723 07:34:29.802001    5088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20652.pem && ln -fs /usr/share/ca-certificates/20652.pem /etc/ssl/certs/20652.pem"
	I0723 07:34:29.805850    5088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20652.pem
	I0723 07:34:29.807577    5088 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:03 /usr/share/ca-certificates/20652.pem
	I0723 07:34:29.807619    5088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20652.pem
	I0723 07:34:29.809884    5088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20652.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 07:34:29.813745    5088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 07:34:29.815423    5088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 07:34:29.817490    5088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 07:34:29.819734    5088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 07:34:29.821858    5088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 07:34:29.824127    5088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 07:34:29.826419    5088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
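
The `openssl x509 -checkend 86400` runs above assert that each control-plane certificate will still be valid 24 hours from now. The same check in stdlib Go (the path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// validFor reports whether the first certificate in the PEM file is
// still valid `d` from now -- the moral equivalent of -checkend.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for 24h:", ok)
}
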
	I0723 07:34:29.828880    5088 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50269 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0723 07:34:29.828980    5088 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0723 07:34:29.846948    5088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 07:34:29.850265    5088 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 07:34:29.850273    5088 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 07:34:29.850313    5088 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 07:34:29.857729    5088 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 07:34:29.857991    5088 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-462000" does not appear in /Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:34:29.858041    5088 kubeconfig.go:62] /Users/jenkins/minikube-integration/19319-1567/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-462000" cluster setting kubeconfig missing "stopped-upgrade-462000" context setting]
	I0723 07:34:29.858182    5088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/kubeconfig: {Name:mkd61b3eb94b798a54b8f29057406aee7268d37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:34:29.860562    5088 kapi.go:59] client config for stopped-upgrade-462000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/client.key", CAFile:"/Users/jenkins/minikube-integration/19319-1567/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10399bfa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0723 07:34:29.860908    5088 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 07:34:29.864321    5088 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-462000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
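
Drift detection (kubeadm.go:640 above) reduces to `diff -u old new`: exit status 0 means no drift, 1 means the rendered config changed (here the criSocket scheme and the cgroup driver) and the cluster is reconfigured from the new YAML. A compact Go sketch of that decision:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

// configDrift returns (true, unified diff) when the two files differ.
// diff exits 0 for identical files, 1 for differences, >1 on error.
func configDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
	if err == nil {
		return false, "", nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	if drift {
		fmt.Println("reconfiguring cluster from new config:\n" + diff)
	}
}
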
	I0723 07:34:29.864330    5088 kubeadm.go:1160] stopping kube-system containers ...
	I0723 07:34:29.864396    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0723 07:34:29.876922    5088 docker.go:483] Stopping containers: [996159a4fc51 838eaea70f87 b1c9f2805558 e624895bef16 dbab5bb04e05 0dc374afc951 5d3ec9f21f84 c0e160d14428]
	I0723 07:34:29.877017    5088 ssh_runner.go:195] Run: docker stop 996159a4fc51 838eaea70f87 b1c9f2805558 e624895bef16 dbab5bb04e05 0dc374afc951 5d3ec9f21f84 c0e160d14428
	I0723 07:34:29.889522    5088 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 07:34:29.896431    5088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 07:34:29.899955    5088 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 07:34:29.899965    5088 kubeadm.go:157] found existing configuration files:
	
	I0723 07:34:29.900007    5088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/admin.conf
	I0723 07:34:29.903090    5088 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 07:34:29.903130    5088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 07:34:29.906065    5088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/kubelet.conf
	I0723 07:34:29.908935    5088 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 07:34:29.908963    5088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 07:34:29.911534    5088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/controller-manager.conf
	I0723 07:34:29.913900    5088 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 07:34:29.913931    5088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 07:34:29.917052    5088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/scheduler.conf
	I0723 07:34:29.919555    5088 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 07:34:29.919576    5088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 07:34:29.922119    5088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 07:34:29.925382    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:34:29.948213    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:34:30.335343    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:34:30.454165    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 07:34:30.474553    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
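
Because existing configuration files were found, the restart path re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init, as the five commands above show. A minimal sketch of that phase sequence; kubeadm on PATH and the config path are assumptions:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Phase order mirrors the restartPrimaryControlPlane sequence in the log.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("%v: %v", p, err)
		}
	}
}
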
	I0723 07:34:30.500455    5088 api_server.go:52] waiting for apiserver process to appear ...
	I0723 07:34:30.500531    5088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:31.002820    5088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:31.502574    5088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:34:31.506463    5088 api_server.go:72] duration metric: took 1.006028083s to wait for apiserver process to appear ...
	I0723 07:34:31.506473    5088 api_server.go:88] waiting for apiserver healthz status ...
	I0723 07:34:31.506482    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:34:36.508659    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:34:36.508748    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:34:41.509796    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:34:41.509815    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:34:46.510669    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:34:46.510706    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:34:51.511568    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:34:51.511589    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:34:56.512630    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:34:56.512731    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:01.514683    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:01.514759    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:06.517101    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:06.517145    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:11.519380    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:11.519435    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:16.521624    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:16.521668    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:21.523948    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:21.524015    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:26.525973    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:26.526031    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:31.528325    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
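
Each healthz attempt above is an HTTPS GET against the apiserver that dies on a client-side deadline, hence the regular five-second gaps; after enough failures minikube switches to gathering diagnostics below. A stdlib Go sketch of such a probe (the 5-second timeout and skip-verify TLS mirror the observed behaviour but are assumptions about the exact settings):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func healthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // give up quickly, like the probes in the log
		Transport: &http.Transport{
			// The apiserver's cert is not in the system trust store.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := healthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy yet:", err)
	}
}
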
	I0723 07:35:31.528537    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:35:31.546057    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:35:31.546154    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:35:31.559531    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:35:31.559611    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:35:31.571477    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:35:31.571547    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:35:31.581545    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:35:31.581629    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:35:31.591967    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:35:31.592042    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:35:31.602537    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:35:31.602607    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:35:31.612956    5088 logs.go:276] 0 containers: []
	W0723 07:35:31.612967    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:35:31.613027    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:35:31.626773    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:35:31.626791    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:35:31.626799    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:35:31.705653    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:35:31.705666    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:35:31.723592    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:35:31.723604    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:35:31.749167    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:35:31.749176    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:35:31.761027    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:35:31.761039    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:35:31.775577    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:35:31.775587    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:35:31.796662    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:35:31.796673    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:35:31.812548    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:35:31.812562    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:35:31.830599    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:35:31.830609    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:35:31.860408    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:35:31.860420    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:35:31.864920    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:35:31.864928    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:35:31.877233    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:35:31.877243    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:35:31.892268    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:35:31.892281    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:35:31.906171    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:35:31.906186    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:35:31.921027    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:35:31.921037    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:35:31.939236    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:35:31.939248    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
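
The diagnostic pass above fans out per component: list matching container IDs with `docker ps -a --filter name=k8s_<component>`, then tail each container's last 400 log lines. A stripped-down Go sketch of that fan-out (docker CLI on PATH assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists docker containers whose names match the k8s_<name> pattern.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		for _, id := range ids {
			fmt.Printf("==> %s [%s]\n", c, id)
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}
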
	I0723 07:35:34.450839    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:39.453117    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:39.453290    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:35:39.466934    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:35:39.467011    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:35:39.478777    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:35:39.478847    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:35:39.490083    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:35:39.490157    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:35:39.500893    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:35:39.500981    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:35:39.511040    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:35:39.511113    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:35:39.521799    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:35:39.521876    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:35:39.532257    5088 logs.go:276] 0 containers: []
	W0723 07:35:39.532269    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:35:39.532329    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:35:39.542129    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:35:39.542146    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:35:39.542151    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:35:39.559605    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:35:39.559615    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:35:39.564589    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:35:39.564598    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:35:39.577691    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:35:39.577701    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:35:39.598010    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:35:39.598022    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:35:39.613930    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:35:39.613944    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:35:39.625437    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:35:39.625448    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:35:39.655751    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:35:39.655759    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:35:39.670938    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:35:39.670948    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:35:39.682443    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:35:39.682454    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:35:39.695435    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:35:39.695453    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:35:39.709309    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:35:39.709320    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:35:39.727048    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:35:39.727058    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:35:39.751853    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:35:39.751861    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:35:39.763404    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:35:39.763414    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:35:39.805447    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:35:39.805460    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:35:42.321457    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:47.323702    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:47.323861    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:35:47.335074    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:35:47.335149    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:35:47.345957    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:35:47.346037    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:35:47.357072    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:35:47.357147    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:35:47.368922    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:35:47.368999    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:35:47.381298    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:35:47.381373    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:35:47.391947    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:35:47.392015    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:35:47.405136    5088 logs.go:276] 0 containers: []
	W0723 07:35:47.405147    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:35:47.405210    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:35:47.415995    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:35:47.416013    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:35:47.416018    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:35:47.420310    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:35:47.420317    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:35:47.434459    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:35:47.434470    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:35:47.455593    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:35:47.455603    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:35:47.466952    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:35:47.466963    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:35:47.482192    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:35:47.482202    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:35:47.494636    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:35:47.494649    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:35:47.514491    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:35:47.514503    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:35:47.526360    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:35:47.526370    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:35:47.539977    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:35:47.539987    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:35:47.552267    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:35:47.552280    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:35:47.573113    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:35:47.573122    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:35:47.584690    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:35:47.584700    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:35:47.613649    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:35:47.613657    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:35:47.649590    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:35:47.649602    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:35:47.663478    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:35:47.663489    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
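
Editor's note: before each gathering sweep, the log enumerates one container per control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`. A self-contained sketch of that discovery step (containerIDs and the component list are taken from the commands visible above; the function name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs whose name matches k8s_<component>,
// mirroring the `docker ps -a --filter ... --format {{.ID}}` calls in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per output line
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("W No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}

Two IDs per component (e.g. d1d8817c0859 and 560c1f1e59ec for kube-apiserver) are consistent with `ps -a` matching both an exited and a current container for the same pod, which is why the sweep gathers logs from both.
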
	I0723 07:35:50.193307    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:35:55.195913    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:35:55.196153    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:35:55.220908    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:35:55.221034    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:35:55.237547    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:35:55.237630    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:35:55.250769    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:35:55.250857    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:35:55.262250    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:35:55.262340    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:35:55.279113    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:35:55.279176    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:35:55.290185    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:35:55.290249    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:35:55.300365    5088 logs.go:276] 0 containers: []
	W0723 07:35:55.300379    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:35:55.300442    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:35:55.312170    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:35:55.312188    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:35:55.312193    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:35:55.328137    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:35:55.328148    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:35:55.349256    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:35:55.349267    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:35:55.361271    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:35:55.361286    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:35:55.373734    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:35:55.373745    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:35:55.402550    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:35:55.402560    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:35:55.416367    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:35:55.416378    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:35:55.437294    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:35:55.437306    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:35:55.450945    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:35:55.450954    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:35:55.489132    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:35:55.489143    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:35:55.506494    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:35:55.506507    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:35:55.517999    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:35:55.518011    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:35:55.533427    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:35:55.533438    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:35:55.551320    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:35:55.551330    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:35:55.562642    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:35:55.562652    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:35:55.588350    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:35:55.588362    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
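
Editor's note: the gathering sweep draws from a fixed set of sources, each run over SSH as `/bin/bash -c "..."`. The commands below are transcribed directly from the log lines above; only the Go scaffolding around them is illustrative.

package main

import "fmt"

func main() {
	sources := map[string]string{
		"kubelet":        "sudo journalctl -u kubelet -n 400",
		"Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		// `which crictl || echo crictl` keeps the command non-empty even when
		// crictl is not on PATH; `|| sudo docker ps -a` is the fallback when
		// the crictl invocation is absent or fails.
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	// Per-container logs use a bounded tail so each sweep stays manageable.
	containerLog := func(id string) string {
		return fmt.Sprintf("docker logs --tail 400 %s", id)
	}
	for name, cmd := range sources {
		fmt.Printf("%-16s /bin/bash -c %q\n", name, cmd)
	}
	fmt.Println("per-container:   ", containerLog("<container-id>"))
}
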
	I0723 07:35:58.094576    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:03.096866    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:03.097073    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:03.117999    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:36:03.118109    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:03.133043    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:36:03.133140    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:03.147149    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:36:03.147221    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:03.158118    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:36:03.158187    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:03.174285    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:36:03.174350    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:03.186658    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:36:03.186733    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:03.196796    5088 logs.go:276] 0 containers: []
	W0723 07:36:03.196809    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:03.196868    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:03.206863    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:36:03.206887    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:03.206892    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:03.232102    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:36:03.232113    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:03.244196    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:36:03.244209    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:36:03.258483    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:36:03.258493    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:36:03.273049    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:36:03.273060    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:36:03.288266    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:36:03.288279    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:36:03.306394    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:36:03.306407    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:36:03.324222    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:03.324231    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:03.328250    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:36:03.328258    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:36:03.339924    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:36:03.339935    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:36:03.351567    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:36:03.351578    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:36:03.365552    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:36:03.365564    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:36:03.379133    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:36:03.379150    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:36:03.402428    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:36:03.402438    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:36:03.414117    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:03.414128    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:03.443667    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:03.443678    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:05.981749    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:10.983993    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:10.984148    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:11.002026    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:36:11.002121    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:11.013740    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:36:11.013812    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:11.023712    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:36:11.023774    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:11.034293    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:36:11.034362    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:11.047415    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:36:11.047482    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:11.063930    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:36:11.064006    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:11.074338    5088 logs.go:276] 0 containers: []
	W0723 07:36:11.074350    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:11.074406    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:11.084713    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:36:11.084732    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:11.084737    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:11.113763    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:11.113776    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:11.171329    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:36:11.171341    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:36:11.187493    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:36:11.187504    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:36:11.204928    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:36:11.204939    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:36:11.223630    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:36:11.223644    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:11.235737    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:36:11.235747    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:36:11.256199    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:36:11.256212    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:36:11.269930    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:36:11.269940    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:36:11.294340    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:36:11.294350    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:36:11.317828    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:11.317838    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:11.322141    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:36:11.322150    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:36:11.339189    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:36:11.339200    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:36:11.357640    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:36:11.357654    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:36:11.372047    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:36:11.372056    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:36:11.383359    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:11.383370    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:13.911581    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:18.913933    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:18.914157    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:18.939186    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:36:18.939310    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:18.955247    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:36:18.955338    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:18.967918    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:36:18.967989    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:18.979145    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:36:18.979225    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:18.989336    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:36:18.989403    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:18.999883    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:36:18.999951    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:19.009688    5088 logs.go:276] 0 containers: []
	W0723 07:36:19.009700    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:19.009757    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:19.020421    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:36:19.020437    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:36:19.020442    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:36:19.034227    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:36:19.034236    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:36:19.046839    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:36:19.046852    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:36:19.058124    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:36:19.058135    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:36:19.076097    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:19.076107    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:19.101435    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:19.101442    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:19.130779    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:19.130786    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:19.134655    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:36:19.134664    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:36:19.149348    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:36:19.149359    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:36:19.173873    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:36:19.173882    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:36:19.197423    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:36:19.197432    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:36:19.213720    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:36:19.213731    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:36:19.225353    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:19.225364    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:19.263222    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:36:19.263237    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:36:19.278212    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:36:19.278222    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:36:19.289469    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:36:19.289480    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:21.803101    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:26.803828    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:26.803997    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:26.819007    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:36:26.819085    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:26.831036    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:36:26.831117    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:26.841603    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:36:26.841675    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:26.851956    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:36:26.852034    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:26.862889    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:36:26.862965    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:26.873834    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:36:26.873908    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:26.884899    5088 logs.go:276] 0 containers: []
	W0723 07:36:26.884910    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:26.884982    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:26.895651    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:36:26.895671    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:26.895677    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:26.924499    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:26.924508    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:26.958197    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:36:26.958208    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:36:26.979018    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:26.979029    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:26.983030    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:36:26.983037    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:36:26.996159    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:36:26.996173    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:36:27.010593    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:36:27.010603    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:36:27.028208    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:36:27.028219    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:36:27.040090    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:36:27.040101    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:36:27.054892    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:36:27.054906    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:36:27.066557    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:36:27.066571    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:36:27.086975    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:27.086987    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:27.111660    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:36:27.111669    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:36:27.126124    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:36:27.126136    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:36:27.137577    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:36:27.137590    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:36:27.157617    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:36:27.157629    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:29.670925    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:34.673277    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:34.673381    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:34.689728    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:36:34.689804    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:34.700694    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:36:34.700772    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:34.711391    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:36:34.711466    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:34.723489    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:36:34.723563    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:34.734545    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:36:34.734619    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:34.745640    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:36:34.745717    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:34.757444    5088 logs.go:276] 0 containers: []
	W0723 07:36:34.757463    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:34.757527    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:34.767869    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:36:34.767887    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:36:34.767892    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:36:34.782762    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:36:34.782771    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:36:34.803533    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:36:34.803544    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:36:34.815365    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:36:34.815375    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:34.828695    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:36:34.828707    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:36:34.843750    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:36:34.843760    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:36:34.857955    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:34.857967    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:34.883132    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:34.883141    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:34.911955    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:34.911965    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:34.948522    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:36:34.948533    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:36:34.960201    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:36:34.960212    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:36:34.980877    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:36:34.980888    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:36:34.999169    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:34.999181    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:35.003800    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:36:35.003807    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:36:35.017956    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:36:35.017967    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:36:35.031571    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:36:35.031580    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:36:37.544980    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:42.547309    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:42.547568    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:42.574658    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:36:42.574798    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:42.592293    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:36:42.592388    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:42.613396    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:36:42.613473    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:42.624393    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:36:42.624468    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:42.634723    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:36:42.634794    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:42.645661    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:36:42.645729    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:42.658287    5088 logs.go:276] 0 containers: []
	W0723 07:36:42.658298    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:42.658355    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:42.669172    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:36:42.669194    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:36:42.669200    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:36:42.689756    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:36:42.689769    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:36:42.713287    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:36:42.713298    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:36:42.726834    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:42.726845    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:42.762707    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:36:42.762718    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:36:42.777911    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:36:42.777922    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:36:42.792718    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:36:42.792729    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:42.804928    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:36:42.804944    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:36:42.820073    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:36:42.820086    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:36:42.833099    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:42.833111    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:42.837159    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:36:42.837166    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:36:42.851818    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:36:42.851830    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:36:42.869152    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:36:42.869164    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:36:42.899269    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:36:42.899281    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:36:42.911312    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:42.911322    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:42.935120    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:42.935132    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:45.466550    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:50.468854    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:50.469184    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:50.511013    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:36:50.511201    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:50.533856    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:36:50.533973    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:50.548682    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:36:50.548751    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:50.561096    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:36:50.561175    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:50.572244    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:36:50.572315    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:50.582903    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:36:50.582980    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:50.593348    5088 logs.go:276] 0 containers: []
	W0723 07:36:50.593360    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:50.593419    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:50.603745    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:36:50.603766    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:50.603772    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:50.638727    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:36:50.638738    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:36:50.653725    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:36:50.653735    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:36:50.672198    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:50.672210    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:50.699620    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:50.699630    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:50.703747    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:36:50.703757    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:36:50.719062    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:36:50.719087    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:36:50.733714    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:36:50.733724    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:36:50.751481    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:36:50.751494    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:36:50.762773    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:36:50.762789    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:36:50.775399    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:36:50.775412    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:36:50.797866    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:36:50.797880    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:36:50.817159    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:36:50.817170    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:36:50.829912    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:50.829924    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:50.859983    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:36:50.859993    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:36:50.877993    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:36:50.878005    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:36:53.397448    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:36:58.399707    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:36:58.399882    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:36:58.426464    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:36:58.426592    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:36:58.445689    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:36:58.445792    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:36:58.458809    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:36:58.458880    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:36:58.470268    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:36:58.470331    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:36:58.480684    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:36:58.480777    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:36:58.492763    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:36:58.492839    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:36:58.503047    5088 logs.go:276] 0 containers: []
	W0723 07:36:58.503059    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:36:58.503118    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:36:58.513085    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:36:58.513102    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:36:58.513107    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:36:58.527295    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:36:58.527305    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:36:58.538812    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:36:58.538823    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:36:58.550297    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:36:58.550306    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:36:58.571566    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:36:58.571578    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:36:58.587064    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:36:58.587076    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:36:58.611089    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:36:58.611097    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:36:58.639439    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:36:58.639448    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:36:58.653671    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:36:58.653684    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:36:58.675480    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:36:58.675490    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:36:58.693559    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:36:58.693569    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:36:58.711446    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:36:58.711457    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:36:58.716170    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:36:58.716177    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:36:58.754077    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:36:58.754090    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:36:58.767501    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:36:58.767513    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:36:58.784412    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:36:58.784424    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:01.297945    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:06.300338    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:06.300855    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:06.338621    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:37:06.338734    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:06.358140    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:37:06.358242    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:06.377075    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:37:06.377153    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:06.389263    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:37:06.389333    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:06.400157    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:37:06.400226    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:06.411342    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:37:06.411417    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:06.422242    5088 logs.go:276] 0 containers: []
	W0723 07:37:06.422254    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:06.422315    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:06.433192    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:37:06.433212    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:37:06.433219    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:37:06.445993    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:06.446004    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:06.471033    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:06.471041    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:06.506868    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:37:06.506879    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:37:06.519282    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:37:06.519293    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:37:06.531136    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:06.531146    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:06.535682    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:37:06.535689    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:37:06.551893    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:37:06.551904    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:37:06.568671    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:06.568681    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:06.597998    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:37:06.598005    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:37:06.613796    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:37:06.613811    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:37:06.631204    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:37:06.631216    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:37:06.652055    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:37:06.652066    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:06.663753    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:37:06.663765    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:37:06.679312    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:37:06.679329    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:37:06.692203    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:37:06.692213    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:37:09.209288    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:14.211010    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:14.211213    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:14.223050    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:37:14.223125    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:14.237685    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:37:14.237754    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:14.248935    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:37:14.249003    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:14.260645    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:37:14.260718    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:14.272161    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:37:14.272228    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:14.286124    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:37:14.286191    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:14.295840    5088 logs.go:276] 0 containers: []
	W0723 07:37:14.295853    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:14.295906    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:14.306253    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:37:14.306267    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:14.306272    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:14.329989    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:14.329996    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:14.334019    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:37:14.334028    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:37:14.348395    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:37:14.348406    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:37:14.362441    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:37:14.362455    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:37:14.377425    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:37:14.377436    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:37:14.388780    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:14.388791    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:14.417087    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:37:14.417096    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:37:14.429649    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:37:14.429662    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:37:14.450810    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:37:14.450821    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:14.462524    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:37:14.462536    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:37:14.482942    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:37:14.482953    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:37:14.496970    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:37:14.496981    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:37:14.513725    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:37:14.513736    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:37:14.524796    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:14.524810    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:14.559040    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:37:14.559050    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:37:17.086243    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:22.088467    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:22.088652    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:22.106121    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:37:22.106207    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:22.119411    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:37:22.119486    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:22.131124    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:37:22.131199    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:22.142318    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:37:22.142386    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:22.152796    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:37:22.152873    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:22.163173    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:37:22.163246    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:22.173289    5088 logs.go:276] 0 containers: []
	W0723 07:37:22.173300    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:22.173361    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:22.183177    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:37:22.183195    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:22.183201    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:22.218792    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:37:22.218805    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:37:22.231931    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:37:22.231941    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:37:22.249832    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:22.249842    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:22.280250    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:22.280257    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:22.284066    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:37:22.284073    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:37:22.298071    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:37:22.298083    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:37:22.309573    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:37:22.309584    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:37:22.323147    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:37:22.323157    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:37:22.335044    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:37:22.335055    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:37:22.346656    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:22.346666    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:22.369494    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:37:22.369502    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:22.381749    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:37:22.381764    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:37:22.395706    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:37:22.395718    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:37:22.416296    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:37:22.416306    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:37:22.431555    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:37:22.431569    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:37:24.953858    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:29.955955    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:29.956147    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:29.977871    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:37:29.977980    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:29.994057    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:37:29.994142    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:30.007058    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:37:30.007136    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:30.018124    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:37:30.018193    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:30.028391    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:37:30.028454    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:30.039195    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:37:30.039254    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:30.049311    5088 logs.go:276] 0 containers: []
	W0723 07:37:30.049321    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:30.049376    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:30.059555    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:37:30.059574    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:37:30.059579    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:30.071073    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:37:30.071084    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:37:30.085132    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:37:30.085147    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:37:30.097130    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:37:30.097142    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:37:30.113807    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:37:30.113821    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:37:30.130708    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:30.130718    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:30.159682    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:30.159692    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:30.163932    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:30.163939    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:30.199151    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:37:30.199162    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:37:30.220126    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:37:30.220133    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:37:30.235191    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:37:30.235202    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:37:30.249775    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:37:30.249785    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:37:30.263983    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:37:30.263995    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:37:30.278394    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:37:30.278404    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:37:30.290382    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:37:30.290392    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:37:30.302159    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:30.302170    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:32.827571    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:37.829785    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:37.830032    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:37.845446    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:37:37.845532    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:37.857697    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:37:37.857777    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:37.881292    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:37:37.881362    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:37.892890    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:37:37.892961    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:37.903715    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:37:37.903796    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:37.914557    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:37:37.914626    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:37.926550    5088 logs.go:276] 0 containers: []
	W0723 07:37:37.926564    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:37.926629    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:37.937072    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:37:37.937091    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:37:37.937096    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:37:37.950152    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:37:37.950164    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:37:37.970549    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:37.970559    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:37.996793    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:37:37.996802    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:38.008358    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:38.008371    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:38.043062    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:37:38.043073    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:37:38.056210    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:38.056220    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:38.060538    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:37:38.060548    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:37:38.074830    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:37:38.074841    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:37:38.090315    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:37:38.090326    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:37:38.107589    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:37:38.107600    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:37:38.124917    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:37:38.124930    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:37:38.136935    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:38.136945    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:38.167152    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:37:38.167160    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:37:38.183178    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:37:38.183189    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:37:38.194312    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:37:38.194325    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:37:40.708448    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:45.709566    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:45.709712    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:45.721938    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:37:45.722001    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:45.732278    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:37:45.732348    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:45.742851    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:37:45.742922    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:45.753626    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:37:45.753701    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:45.764011    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:37:45.764083    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:45.777874    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:37:45.777947    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:45.787687    5088 logs.go:276] 0 containers: []
	W0723 07:37:45.787698    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:45.787757    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:45.798459    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:37:45.798476    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:45.798483    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:45.834012    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:37:45.834027    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:37:45.849959    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:37:45.849970    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:37:45.867555    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:45.867567    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:45.898374    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:37:45.898386    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:37:45.919315    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:37:45.919332    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:37:45.938969    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:37:45.938986    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:37:45.950243    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:45.950256    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:45.955109    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:37:45.955117    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:37:45.966333    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:37:45.966343    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:37:45.977779    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:45.977794    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:46.002964    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:37:46.002972    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:46.015309    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:37:46.015321    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:37:46.037262    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:37:46.037273    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:37:46.054991    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:37:46.055013    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:37:46.077015    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:37:46.077028    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:37:48.596934    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:37:53.597546    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:37:53.597748    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:37:53.617954    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:37:53.618070    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:37:53.632440    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:37:53.632514    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:37:53.644120    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:37:53.644193    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:37:53.655185    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:37:53.655251    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:37:53.665866    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:37:53.665930    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:37:53.680437    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:37:53.680503    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:37:53.693476    5088 logs.go:276] 0 containers: []
	W0723 07:37:53.693490    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:37:53.693545    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:37:53.704427    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:37:53.704447    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:37:53.704453    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:37:53.731493    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:37:53.731503    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:37:53.743672    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:37:53.743683    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:37:53.783154    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:37:53.783166    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:37:53.797972    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:37:53.797984    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:37:53.822121    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:37:53.822132    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:37:53.833955    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:37:53.833965    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:37:53.838231    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:37:53.838237    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:37:53.856062    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:37:53.856072    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:37:53.871636    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:37:53.871646    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:37:53.886607    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:37:53.886622    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:37:53.899692    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:37:53.899706    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:37:53.910933    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:37:53.910944    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:37:53.922322    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:37:53.922334    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:37:53.952487    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:37:53.952495    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:37:53.979959    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:37:53.979969    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:37:56.504854    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:01.505275    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:01.505492    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:38:01.523369    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:38:01.523449    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:38:01.537137    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:38:01.537219    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:38:01.548391    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:38:01.548463    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:38:01.559493    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:38:01.559561    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:38:01.570203    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:38:01.570272    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:38:01.580766    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:38:01.580846    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:38:01.591107    5088 logs.go:276] 0 containers: []
	W0723 07:38:01.591123    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:38:01.591183    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:38:01.602267    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:38:01.602286    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:38:01.602291    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:38:01.623134    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:38:01.623148    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:38:01.636647    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:38:01.636662    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:38:01.659485    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:38:01.659494    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:38:01.673835    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:38:01.673848    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:38:01.685322    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:38:01.685333    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:38:01.699653    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:38:01.699668    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:38:01.717150    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:38:01.717161    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:38:01.734472    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:38:01.734489    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:38:01.746020    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:38:01.746031    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:38:01.776052    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:38:01.776060    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:38:01.779959    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:38:01.779968    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:38:01.794105    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:38:01.794118    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:38:01.806052    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:38:01.806062    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:38:01.841394    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:38:01.841408    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:38:01.855309    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:38:01.855319    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:38:04.373114    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:09.375456    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:09.375697    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:38:09.405186    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:38:09.405276    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:38:09.417718    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:38:09.417796    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:38:09.432413    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:38:09.432481    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:38:09.443011    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:38:09.443083    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:38:09.453569    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:38:09.453641    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:38:09.463639    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:38:09.463717    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:38:09.473680    5088 logs.go:276] 0 containers: []
	W0723 07:38:09.473694    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:38:09.473752    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:38:09.486562    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:38:09.486580    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:38:09.486585    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:38:09.509948    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:38:09.509966    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:38:09.521691    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:38:09.521701    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:38:09.550361    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:38:09.550372    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:38:09.564972    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:38:09.564982    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:38:09.585549    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:38:09.585560    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:38:09.600752    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:38:09.600766    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:38:09.612782    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:38:09.612795    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:38:09.623728    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:38:09.623742    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:38:09.636061    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:38:09.636072    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:38:09.653817    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:38:09.653827    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:38:09.671168    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:38:09.671182    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:38:09.706732    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:38:09.706744    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:38:09.721028    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:38:09.721039    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:38:09.733861    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:38:09.733872    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:38:09.738193    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:38:09.738203    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:38:12.258746    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:17.260917    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:17.261081    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:38:17.272196    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:38:17.272285    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:38:17.283594    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:38:17.283666    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:38:17.294352    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:38:17.294432    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:38:17.305629    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:38:17.305697    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:38:17.316693    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:38:17.316768    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:38:17.333006    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:38:17.333080    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:38:17.343098    5088 logs.go:276] 0 containers: []
	W0723 07:38:17.343110    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:38:17.343161    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:38:17.353913    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:38:17.353932    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:38:17.353938    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:38:17.376580    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:38:17.376588    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:38:17.404444    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:38:17.404454    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:38:17.418301    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:38:17.418314    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:38:17.429566    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:38:17.429578    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:38:17.450408    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:38:17.450419    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:38:17.467791    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:38:17.467802    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:38:17.471879    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:38:17.471886    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:38:17.491882    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:38:17.491893    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:38:17.503617    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:38:17.503628    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:38:17.517382    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:38:17.517393    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:38:17.552446    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:38:17.552456    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:38:17.565520    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:38:17.565531    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:38:17.578070    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:38:17.578083    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:38:17.596093    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:38:17.596110    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:38:17.610516    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:38:17.610531    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:38:20.128539    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:25.130728    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:25.131091    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:38:25.171119    5088 logs.go:276] 2 containers: [d1d8817c0859 560c1f1e59ec]
	I0723 07:38:25.171264    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:38:25.195935    5088 logs.go:276] 2 containers: [27e375b200ec e624895bef16]
	I0723 07:38:25.196041    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:38:25.210308    5088 logs.go:276] 1 containers: [2def5d6b1b82]
	I0723 07:38:25.210389    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:38:25.227466    5088 logs.go:276] 2 containers: [c51b35284718 b1c9f2805558]
	I0723 07:38:25.227536    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:38:25.243101    5088 logs.go:276] 1 containers: [6d757b3adf50]
	I0723 07:38:25.243175    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:38:25.258035    5088 logs.go:276] 2 containers: [d59da3733f24 a6aa29de3033]
	I0723 07:38:25.258108    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:38:25.268617    5088 logs.go:276] 0 containers: []
	W0723 07:38:25.268632    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:38:25.268697    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:38:25.279742    5088 logs.go:276] 1 containers: [ab352f4b063d]
	I0723 07:38:25.279762    5088 logs.go:123] Gathering logs for kube-controller-manager [a6aa29de3033] ...
	I0723 07:38:25.279769    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6aa29de3033"
	I0723 07:38:25.324794    5088 logs.go:123] Gathering logs for storage-provisioner [ab352f4b063d] ...
	I0723 07:38:25.324804    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab352f4b063d"
	I0723 07:38:25.340185    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:38:25.340200    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:38:25.369610    5088 logs.go:123] Gathering logs for etcd [27e375b200ec] ...
	I0723 07:38:25.369621    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e375b200ec"
	I0723 07:38:25.383344    5088 logs.go:123] Gathering logs for etcd [e624895bef16] ...
	I0723 07:38:25.383357    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e624895bef16"
	I0723 07:38:25.400064    5088 logs.go:123] Gathering logs for kube-scheduler [b1c9f2805558] ...
	I0723 07:38:25.400076    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c9f2805558"
	I0723 07:38:25.419203    5088 logs.go:123] Gathering logs for kube-proxy [6d757b3adf50] ...
	I0723 07:38:25.419212    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d757b3adf50"
	I0723 07:38:25.431327    5088 logs.go:123] Gathering logs for kube-controller-manager [d59da3733f24] ...
	I0723 07:38:25.431338    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d59da3733f24"
	I0723 07:38:25.448676    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:38:25.448686    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:38:25.482671    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:38:25.482684    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:38:25.506056    5088 logs.go:123] Gathering logs for kube-apiserver [d1d8817c0859] ...
	I0723 07:38:25.506065    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1d8817c0859"
	I0723 07:38:25.520028    5088 logs.go:123] Gathering logs for coredns [2def5d6b1b82] ...
	I0723 07:38:25.520039    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2def5d6b1b82"
	I0723 07:38:25.531895    5088 logs.go:123] Gathering logs for kube-scheduler [c51b35284718] ...
	I0723 07:38:25.531908    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c51b35284718"
	I0723 07:38:25.552848    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:38:25.552857    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:38:25.557022    5088 logs.go:123] Gathering logs for kube-apiserver [560c1f1e59ec] ...
	I0723 07:38:25.557029    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560c1f1e59ec"
	I0723 07:38:25.571611    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:38:25.571623    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
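Each retry cycle above has the same shape: enumerate the containers for every control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tail the last 400 lines of each match with `docker logs --tail 400 <id>`. The sketch below is a hypothetical local reconstruction of that pattern (minikube itself runs these commands over SSH inside the guest via ssh_runner); the component names and command arguments are copied from the log, everything else is illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Components enumerated in each log-gathering pass above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func main() {
	for _, c := range components {
		// Same listing command the log shows:
		//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		for _, id := range ids {
			// Tail the last 400 lines, as in: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
		}
	}
}
```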
	I0723 07:38:28.085025    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:33.087506    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:33.087582    5088 kubeadm.go:597] duration metric: took 4m3.241700208s to restartPrimaryControlPlane
	W0723 07:38:33.087646    5088 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
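The timestamps above imply the probe loop that just failed: an HTTPS GET to the apiserver /healthz endpoint with a 5-second client timeout ("Client.Timeout exceeded while awaiting headers"), a log-gathering pass, then another attempt, until minikube gives up after roughly four minutes (4m3.24s here) and falls back to a full `kubeadm reset`. Below is a minimal sketch of that loop, assuming the 5s timeout and ~4-minute budget inferred from the timestamps; `pollHealthz` is a hypothetical name, not minikube's actual API.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz is a hypothetical reconstruction of the retry loop visible
// above: GET the /healthz endpoint with a 5-second client timeout
// (inferred from the 07:37:09 -> 07:37:14 gap), repeated until an
// overall deadline expires.
func pollHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches "Client.Timeout exceeded while awaiting headers"
		Transport: &http.Transport{
			// The apiserver inside the VM serves a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered: control plane is up
			}
		}
		// In the log, each failed probe is followed by a log-gathering
		// pass (~2-3s) before the next attempt.
		time.Sleep(2500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %s", overall)
}

func main() {
	// The run above exhausted its budget and fell back to "kubeadm reset".
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```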
	I0723 07:38:33.087674    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0723 07:38:34.061038    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 07:38:34.066039    5088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 07:38:34.069166    5088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 07:38:34.072090    5088 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 07:38:34.072096    5088 kubeadm.go:157] found existing configuration files:
	
	I0723 07:38:34.072121    5088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/admin.conf
	I0723 07:38:34.074940    5088 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 07:38:34.074964    5088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 07:38:34.078405    5088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/kubelet.conf
	I0723 07:38:34.081438    5088 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 07:38:34.081458    5088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 07:38:34.084094    5088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/controller-manager.conf
	I0723 07:38:34.086918    5088 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 07:38:34.086940    5088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 07:38:34.090058    5088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/scheduler.conf
	I0723 07:38:34.092919    5088 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 07:38:34.092943    5088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
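The four grep-then-rm pairs above are minikube's stale kubeconfig cleanup: any file under /etc/kubernetes that cannot be confirmed to reference the expected control-plane endpoint is deleted so that kubeadm regenerates it during init. A minimal bash sketch of the same loop, with the endpoint and file names taken from the log (the script is illustrative, not minikube's actual code):

    #!/bin/bash
    # Endpoint minikube greps for in each kubeconfig (from the log above).
    ENDPOINT="https://control-plane.minikube.internal:50269"

    for name in admin kubelet controller-manager scheduler; do
      f="/etc/kubernetes/${name}.conf"
      # grep exits non-zero if the endpoint is absent or the file is missing;
      # either way the file is removed so `kubeadm init` rewrites it.
      if ! sudo grep -q "$ENDPOINT" "$f" 2>/dev/null; then
        sudo rm -f "$f"
      fi
    done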
	I0723 07:38:34.095391    5088 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 07:38:34.113811    5088 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0723 07:38:34.113840    5088 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 07:38:34.167626    5088 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 07:38:34.167682    5088 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 07:38:34.167740    5088 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 07:38:34.217111    5088 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 07:38:34.221328    5088 out.go:204]   - Generating certificates and keys ...
	I0723 07:38:34.221420    5088 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 07:38:34.221590    5088 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 07:38:34.221629    5088 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 07:38:34.221684    5088 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 07:38:34.221721    5088 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 07:38:34.221746    5088 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 07:38:34.221786    5088 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 07:38:34.221827    5088 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 07:38:34.221858    5088 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 07:38:34.221890    5088 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 07:38:34.221911    5088 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 07:38:34.221961    5088 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 07:38:34.276843    5088 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 07:38:34.330045    5088 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 07:38:34.424385    5088 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 07:38:34.494245    5088 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 07:38:34.530972    5088 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 07:38:34.531404    5088 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 07:38:34.531477    5088 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 07:38:34.618091    5088 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 07:38:34.622323    5088 out.go:204]   - Booting up control plane ...
	I0723 07:38:34.622368    5088 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 07:38:34.622408    5088 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 07:38:34.622440    5088 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 07:38:34.622477    5088 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 07:38:34.623528    5088 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 07:38:39.125089    5088 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501565 seconds
	I0723 07:38:39.125161    5088 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 07:38:39.130226    5088 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 07:38:39.638458    5088 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 07:38:39.638672    5088 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-462000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 07:38:40.162303    5088 kubeadm.go:310] [bootstrap-token] Using token: caxbkx.4eufilgif7aprm3a
	I0723 07:38:40.167200    5088 out.go:204]   - Configuring RBAC rules ...
	I0723 07:38:40.167416    5088 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 07:38:40.168696    5088 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 07:38:40.174255    5088 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 07:38:40.176659    5088 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 07:38:40.178766    5088 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 07:38:40.180823    5088 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 07:38:40.188079    5088 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 07:38:40.371554    5088 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 07:38:40.570882    5088 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 07:38:40.571496    5088 kubeadm.go:310] 
	I0723 07:38:40.571595    5088 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 07:38:40.571621    5088 kubeadm.go:310] 
	I0723 07:38:40.571761    5088 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 07:38:40.571774    5088 kubeadm.go:310] 
	I0723 07:38:40.571802    5088 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 07:38:40.571896    5088 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 07:38:40.571959    5088 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 07:38:40.571965    5088 kubeadm.go:310] 
	I0723 07:38:40.572039    5088 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 07:38:40.572049    5088 kubeadm.go:310] 
	I0723 07:38:40.572155    5088 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 07:38:40.572166    5088 kubeadm.go:310] 
	I0723 07:38:40.572194    5088 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 07:38:40.572238    5088 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 07:38:40.572359    5088 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 07:38:40.572383    5088 kubeadm.go:310] 
	I0723 07:38:40.572434    5088 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 07:38:40.572475    5088 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 07:38:40.572483    5088 kubeadm.go:310] 
	I0723 07:38:40.572524    5088 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token caxbkx.4eufilgif7aprm3a \
	I0723 07:38:40.572575    5088 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:29adbcbc0a6bf2a081f567e258fc4ee09254f17c26f802d72ace65c98bb575cd \
	I0723 07:38:40.572603    5088 kubeadm.go:310] 	--control-plane 
	I0723 07:38:40.572606    5088 kubeadm.go:310] 
	I0723 07:38:40.572647    5088 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 07:38:40.572654    5088 kubeadm.go:310] 
	I0723 07:38:40.572702    5088 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token caxbkx.4eufilgif7aprm3a \
	I0723 07:38:40.572758    5088 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:29adbcbc0a6bf2a081f567e258fc4ee09254f17c26f802d72ace65c98bb575cd 
	I0723 07:38:40.572821    5088 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 07:38:40.572830    5088 cni.go:84] Creating CNI manager for ""
	I0723 07:38:40.572839    5088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:38:40.576754    5088 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 07:38:40.579893    5088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 07:38:40.582806    5088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
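The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube generates for this driver/runtime combination. Its exact contents are not in the log; the sketch below writes a hypothetical minimal bridge conflist of the same general shape (the plugin list and the 10.244.0.0/16 pod subnet are assumptions, not values read from this run):

    # Hypothetical minimal bridge CNI config; shape only, not minikube's bytes.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF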
	I0723 07:38:40.587608    5088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 07:38:40.587648    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 07:38:40.587707    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-462000 minikube.k8s.io/updated_at=2024_07_23T07_38_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=stopped-upgrade-462000 minikube.k8s.io/primary=true
	I0723 07:38:40.618476    5088 kubeadm.go:1113] duration metric: took 30.860417ms to wait for elevateKubeSystemPrivileges
	I0723 07:38:40.618495    5088 ops.go:34] apiserver oom_adj: -16
	I0723 07:38:40.633816    5088 kubeadm.go:394] duration metric: took 4m10.809473583s to StartCluster
	I0723 07:38:40.633835    5088 settings.go:142] acquiring lock: {Name:mkd8f4c38e79948dfc5500ad891e72aa4257d24c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:38:40.633918    5088 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:38:40.634335    5088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/kubeconfig: {Name:mkd61b3eb94b798a54b8f29057406aee7268d37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:38:40.634517    5088 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:38:40.634529    5088 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 07:38:40.634566    5088 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-462000"
	I0723 07:38:40.634578    5088 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-462000"
	W0723 07:38:40.634581    5088 addons.go:243] addon storage-provisioner should already be in state true
	I0723 07:38:40.634595    5088 host.go:66] Checking if "stopped-upgrade-462000" exists ...
	I0723 07:38:40.634611    5088 config.go:182] Loaded profile config "stopped-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0723 07:38:40.634656    5088 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-462000"
	I0723 07:38:40.634667    5088 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-462000"
	I0723 07:38:40.635638    5088 kapi.go:59] client config for stopped-upgrade-462000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/stopped-upgrade-462000/client.key", CAFile:"/Users/jenkins/minikube-integration/19319-1567/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10399bfa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0723 07:38:40.635750    5088 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-462000"
	W0723 07:38:40.635754    5088 addons.go:243] addon default-storageclass should already be in state true
	I0723 07:38:40.635761    5088 host.go:66] Checking if "stopped-upgrade-462000" exists ...
	I0723 07:38:40.637799    5088 out.go:177] * Verifying Kubernetes components...
	I0723 07:38:40.638225    5088 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 07:38:40.641865    5088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 07:38:40.641873    5088 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0723 07:38:40.645679    5088 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 07:38:40.648697    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 07:38:40.651739    5088 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 07:38:40.651745    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 07:38:40.651751    5088 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/stopped-upgrade-462000/id_rsa Username:docker}
	I0723 07:38:40.739879    5088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 07:38:40.744939    5088 api_server.go:52] waiting for apiserver process to appear ...
	I0723 07:38:40.744990    5088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 07:38:40.748748    5088 api_server.go:72] duration metric: took 114.222291ms to wait for apiserver process to appear ...
	I0723 07:38:40.748756    5088 api_server.go:88] waiting for apiserver healthz status ...
	I0723 07:38:40.748763    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:40.793298    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 07:38:40.828337    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 07:38:45.749678    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:45.749706    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:50.750649    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:50.750673    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:38:55.750773    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:38:55.750795    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:00.750952    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:00.750973    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:05.751209    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:05.751238    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:10.751612    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:10.751651    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0723 07:39:11.139437    5088 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0723 07:39:11.142679    5088 out.go:177] * Enabled addons: storage-provisioner
	I0723 07:39:11.152565    5088 addons.go:510] duration metric: took 30.518592708s for enable addons: enabled=[storage-provisioner]
	I0723 07:39:15.752161    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:15.752205    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:20.752915    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:20.752959    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:25.753892    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:25.753924    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:30.755036    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:30.755060    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:35.756427    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:35.756448    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:40.757209    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
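The block above is one pass of minikube's apiserver readiness loop: roughly every five seconds it issues a GET against /healthz with a short client timeout, and every attempt in this run times out, which is why the log now switches to gathering component logs. Reproduced by hand, a single probe looks approximately like this (curl's -k flag skips TLS verification for brevity; minikube itself validates against the cluster CA):

    # One healthz probe with a 5-second budget, mirroring the loop above.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz \
      && echo "apiserver healthy" \
      || echo "healthz probe failed or timed out"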
	I0723 07:39:40.757412    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:39:40.768426    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:39:40.768509    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:39:40.779410    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:39:40.779487    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:39:40.789660    5088 logs.go:276] 2 containers: [677323d7575f ca7761b8cbf2]
	I0723 07:39:40.789732    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:39:40.800627    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:39:40.800698    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:39:40.811504    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:39:40.811575    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:39:40.821963    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:39:40.822040    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:39:40.839516    5088 logs.go:276] 0 containers: []
	W0723 07:39:40.839528    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:39:40.839592    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:39:40.849795    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:39:40.849811    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:39:40.849816    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:39:40.881094    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:39:40.881105    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:39:40.896451    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:39:40.896460    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:39:40.910705    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:39:40.910715    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:39:40.926204    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:39:40.926216    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:39:40.943157    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:39:40.943173    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:39:40.955243    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:39:40.955254    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:39:40.967506    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:39:40.967520    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:39:40.971746    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:39:40.971755    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:39:41.006061    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:39:41.006071    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:39:41.017828    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:39:41.017839    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:39:41.029551    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:39:41.029563    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:39:41.041241    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:39:41.041256    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
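The "container status" step in the gathering pass above uses a deliberate fallback chain: 'which crictl || echo crictl' substitutes the literal word crictl when the binary is missing, so the first ps command fails cleanly and the '||' branch falls through to plain docker. The same idiom with modern substitution syntax:

    # If crictl is installed, use it; otherwise the echoed placeholder makes the
    # first command fail ("crictl: command not found") and docker takes over.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a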
	I0723 07:39:43.565779    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:48.566887    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:48.567164    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:39:48.589967    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:39:48.590095    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:39:48.610651    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:39:48.610742    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:39:48.622814    5088 logs.go:276] 2 containers: [677323d7575f ca7761b8cbf2]
	I0723 07:39:48.622893    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:39:48.634141    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:39:48.634220    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:39:48.644787    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:39:48.644863    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:39:48.655787    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:39:48.655860    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:39:48.669409    5088 logs.go:276] 0 containers: []
	W0723 07:39:48.669420    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:39:48.669486    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:39:48.680326    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:39:48.680343    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:39:48.680348    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:39:48.692307    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:39:48.692320    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:39:48.704205    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:39:48.704217    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:39:48.715663    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:39:48.715675    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:39:48.720494    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:39:48.720502    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:39:48.759419    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:39:48.759432    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:39:48.776719    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:39:48.776730    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:39:48.792827    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:39:48.792839    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:39:48.804807    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:39:48.804819    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:39:48.827040    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:39:48.827054    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:39:48.850706    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:39:48.850715    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:39:48.881552    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:39:48.881561    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:39:48.896235    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:39:48.896245    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:39:51.409478    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:39:56.411056    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:39:56.411299    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:39:56.429582    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:39:56.429673    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:39:56.447101    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:39:56.447178    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:39:56.457380    5088 logs.go:276] 2 containers: [677323d7575f ca7761b8cbf2]
	I0723 07:39:56.457452    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:39:56.468064    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:39:56.468136    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:39:56.478445    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:39:56.478512    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:39:56.489264    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:39:56.489333    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:39:56.499989    5088 logs.go:276] 0 containers: []
	W0723 07:39:56.500003    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:39:56.500062    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:39:56.510080    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:39:56.510097    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:39:56.510102    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:39:56.525318    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:39:56.525327    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:39:56.539345    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:39:56.539357    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:39:56.551214    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:39:56.551226    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:39:56.563177    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:39:56.563186    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:39:56.580398    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:39:56.580408    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:39:56.613344    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:39:56.613353    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:39:56.617876    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:39:56.617882    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:39:56.652306    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:39:56.652318    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:39:56.665241    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:39:56.665252    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:39:56.676919    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:39:56.676929    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:39:56.692349    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:39:56.692364    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:39:56.703764    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:39:56.703775    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:39:59.229560    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:04.231563    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:04.231858    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:04.262743    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:40:04.262870    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:04.282296    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:40:04.282399    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:04.296212    5088 logs.go:276] 2 containers: [677323d7575f ca7761b8cbf2]
	I0723 07:40:04.296284    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:04.308147    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:40:04.308221    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:04.319376    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:40:04.319454    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:04.331298    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:40:04.331372    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:04.342273    5088 logs.go:276] 0 containers: []
	W0723 07:40:04.342285    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:04.342355    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:04.353544    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:40:04.353561    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:04.353566    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:04.387303    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:40:04.387313    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:40:04.401669    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:40:04.401684    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:40:04.413583    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:04.413593    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:04.438105    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:40:04.438118    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:40:04.451618    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:04.451632    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:04.456434    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:04.456444    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:04.491891    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:40:04.491902    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:40:04.506404    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:40:04.506415    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:40:04.517919    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:40:04.517929    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:40:04.534730    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:40:04.534742    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:40:04.546896    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:40:04.546905    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:40:04.568855    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:40:04.568865    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:07.081842    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:12.083781    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:12.083933    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:12.098224    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:40:12.098313    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:12.109820    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:40:12.109894    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:12.121497    5088 logs.go:276] 2 containers: [677323d7575f ca7761b8cbf2]
	I0723 07:40:12.121571    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:12.132852    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:40:12.132924    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:12.143699    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:40:12.143769    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:12.154223    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:40:12.154291    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:12.164346    5088 logs.go:276] 0 containers: []
	W0723 07:40:12.164361    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:12.164419    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:12.174818    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:40:12.174834    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:12.174840    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:12.198746    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:40:12.198754    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:12.210353    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:40:12.210366    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:40:12.222179    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:40:12.222193    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:40:12.234181    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:40:12.234193    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:40:12.255749    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:40:12.255759    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:40:12.270591    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:40:12.270601    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:40:12.284112    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:40:12.284121    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:40:12.296007    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:40:12.296019    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:40:12.311269    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:40:12.311279    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:40:12.330260    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:12.330271    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:12.362618    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:12.362626    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:12.367238    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:12.367248    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:14.902702    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:19.904703    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:19.904848    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:19.916260    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:40:19.916345    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:19.927075    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:40:19.927140    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:19.938044    5088 logs.go:276] 2 containers: [677323d7575f ca7761b8cbf2]
	I0723 07:40:19.938118    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:19.948998    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:40:19.949070    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:19.959460    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:40:19.959532    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:19.972293    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:40:19.972363    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:19.983114    5088 logs.go:276] 0 containers: []
	W0723 07:40:19.983126    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:19.983187    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:19.994292    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:40:19.994308    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:40:19.994313    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:20.007118    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:40:20.007128    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:40:20.022130    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:40:20.022141    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:40:20.033633    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:40:20.033644    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:40:20.045575    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:40:20.045586    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:40:20.060794    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:40:20.060804    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:40:20.080708    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:20.080719    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:20.105634    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:20.105644    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:20.137915    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:20.137926    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:20.142538    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:20.142546    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:20.176836    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:40:20.176847    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:40:20.191038    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:40:20.191054    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:40:20.202812    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:40:20.202823    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:40:22.719755    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:27.722064    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:27.722274    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:27.738363    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:40:27.738444    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:27.751641    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:40:27.751723    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:27.762945    5088 logs.go:276] 2 containers: [677323d7575f ca7761b8cbf2]
	I0723 07:40:27.763015    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:27.773606    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:40:27.773685    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:27.788481    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:40:27.788559    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:27.799074    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:40:27.799144    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:27.809069    5088 logs.go:276] 0 containers: []
	W0723 07:40:27.809079    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:27.809136    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:27.819232    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:40:27.819247    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:27.819252    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:27.823513    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:27.823520    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:27.862608    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:40:27.862622    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:40:27.880621    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:40:27.880633    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:40:27.896693    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:27.896705    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:27.921353    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:40:27.921363    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:27.933099    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:27.933110    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:27.965094    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:40:27.965119    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:40:27.979693    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:40:27.979705    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:40:27.993736    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:40:27.993749    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:40:28.017714    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:40:28.017728    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:40:28.031609    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:40:28.031622    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:40:28.048995    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:40:28.049005    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:40:30.563956    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:35.566194    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:35.566379    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:35.584995    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:40:35.585077    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:35.597126    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:40:35.597198    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:35.607766    5088 logs.go:276] 2 containers: [677323d7575f ca7761b8cbf2]
	I0723 07:40:35.607843    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:35.618702    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:40:35.618777    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:35.629582    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:40:35.629653    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:35.640088    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:40:35.640158    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:35.650799    5088 logs.go:276] 0 containers: []
	W0723 07:40:35.650813    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:35.650870    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:35.667675    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:40:35.667692    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:40:35.667696    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:40:35.682032    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:40:35.682043    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:40:35.693677    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:40:35.693688    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:40:35.716119    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:40:35.716130    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:40:35.727273    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:35.727285    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:35.752638    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:35.752647    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:35.785083    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:35.785092    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:35.789292    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:35.789297    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:35.824919    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:40:35.824928    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:40:35.840264    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:40:35.840278    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:40:35.851987    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:40:35.851998    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:40:35.864260    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:40:35.864270    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:40:35.882129    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:40:35.882145    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
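	(Each cycle above follows the same gathering pattern: list containers per component with a `k8s_<name>` filter, report the count, then tail each container's logs. A minimal sketch of that loop, assuming the docker commands run locally via os/exec rather than over SSH as minikube's ssh_runner does; names and output formatting here are illustrative, not minikube's logs.go code:)

```go
// gather_logs_sketch.go - mirrors the logs.go:276/123 pattern seen in the log:
// docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then
// docker logs --tail 400 <id> for each match.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+name, "--format={{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		// Mirrors the "N containers: [...]" lines (logs.go:276).
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			// Mirrors the warning for absent components such as kindnet (logs.go:278).
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// Mirrors: /bin/bash -c "docker logs --tail 400 <id>"
			logs, _ := exec.Command("/bin/bash", "-c",
				"docker logs --tail 400 "+id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
		}
	}
}
```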
	I0723 07:40:38.396523    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:43.398758    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:43.398960    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:43.424518    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:40:43.424648    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:43.441839    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:40:43.441925    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:43.455416    5088 logs.go:276] 2 containers: [677323d7575f ca7761b8cbf2]
	I0723 07:40:43.455490    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:43.466754    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:40:43.466819    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:43.476827    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:40:43.476891    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:43.487497    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:40:43.487557    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:43.497656    5088 logs.go:276] 0 containers: []
	W0723 07:40:43.497669    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:43.497720    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:43.508123    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:40:43.508143    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:43.508147    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:43.542552    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:40:43.542563    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:40:43.557567    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:43.557578    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:43.583042    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:40:43.583050    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:43.595520    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:40:43.595532    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:40:43.609852    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:40:43.609865    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:40:43.627354    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:40:43.627365    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:40:43.639976    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:40:43.639987    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:40:43.657278    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:43.657290    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:43.689649    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:43.689657    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:43.694442    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:40:43.694451    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:40:43.708949    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:40:43.708960    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:40:43.720903    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:40:43.720914    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:40:46.240591    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:51.240906    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:51.241057    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:51.261181    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:40:51.261264    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:51.272854    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:40:51.272926    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:51.283388    5088 logs.go:276] 2 containers: [677323d7575f ca7761b8cbf2]
	I0723 07:40:51.283456    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:51.293959    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:40:51.294019    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:51.304140    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:40:51.304204    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:51.320390    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:40:51.320459    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:51.330860    5088 logs.go:276] 0 containers: []
	W0723 07:40:51.330871    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:51.330923    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:51.347084    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:40:51.347105    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:51.347110    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:51.371678    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:40:51.371688    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:51.382784    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:51.382797    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:51.414991    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:40:51.415003    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:40:51.429386    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:40:51.429399    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:40:51.441025    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:40:51.441036    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:40:51.452132    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:40:51.452143    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:40:51.466907    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:40:51.466916    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:40:51.478783    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:51.478796    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:51.483284    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:51.483291    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:51.517698    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:40:51.517713    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:40:51.531969    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:40:51.531982    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:40:51.549814    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:40:51.549824    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:40:54.063483    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:40:59.065608    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:40:59.065892    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:40:59.098265    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:40:59.098366    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:40:59.112754    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:40:59.112836    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:40:59.124997    5088 logs.go:276] 4 containers: [6ca6e3f2e33c 1820cc0cdcd0 677323d7575f ca7761b8cbf2]
	I0723 07:40:59.125071    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:40:59.135881    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:40:59.135953    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:40:59.146770    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:40:59.146844    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:40:59.157503    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:40:59.157576    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:40:59.167486    5088 logs.go:276] 0 containers: []
	W0723 07:40:59.167500    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:40:59.167560    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:40:59.177224    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:40:59.177246    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:40:59.177252    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:40:59.194316    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:40:59.194329    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:40:59.219893    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:40:59.219903    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:40:59.231029    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:40:59.231039    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:40:59.265109    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:40:59.265132    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:40:59.277213    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:40:59.277224    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:40:59.291322    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:40:59.291333    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:40:59.295755    5088 logs.go:123] Gathering logs for coredns [6ca6e3f2e33c] ...
	I0723 07:40:59.295763    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ca6e3f2e33c"
	I0723 07:40:59.307064    5088 logs.go:123] Gathering logs for coredns [1820cc0cdcd0] ...
	I0723 07:40:59.307074    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820cc0cdcd0"
	I0723 07:40:59.318692    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:40:59.318704    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:40:59.336194    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:40:59.336204    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:40:59.368532    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:40:59.368541    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:40:59.379905    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:40:59.379917    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:40:59.395152    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:40:59.395162    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:40:59.407462    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:40:59.407475    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:41:01.923213    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:06.924268    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:06.924477    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:06.946272    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:41:06.946370    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:06.961007    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:41:06.961086    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:06.972880    5088 logs.go:276] 4 containers: [6ca6e3f2e33c 1820cc0cdcd0 677323d7575f ca7761b8cbf2]
	I0723 07:41:06.972956    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:06.983359    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:41:06.983427    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:06.993654    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:41:06.993719    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:07.007691    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:41:07.007761    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:07.018000    5088 logs.go:276] 0 containers: []
	W0723 07:41:07.018012    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:07.018067    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:07.028398    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:41:07.028415    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:07.028421    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:07.059116    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:07.059125    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:07.099334    5088 logs.go:123] Gathering logs for coredns [1820cc0cdcd0] ...
	I0723 07:41:07.099345    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820cc0cdcd0"
	I0723 07:41:07.114011    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:41:07.114022    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:41:07.126431    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:07.126442    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:07.131294    5088 logs.go:123] Gathering logs for coredns [6ca6e3f2e33c] ...
	I0723 07:41:07.131303    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ca6e3f2e33c"
	I0723 07:41:07.143452    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:41:07.143462    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:41:07.158617    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:41:07.158628    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:41:07.172768    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:41:07.172780    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:41:07.184913    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:41:07.184924    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:41:07.203095    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:07.203109    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:41:07.228259    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:41:07.228268    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:41:07.250533    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:41:07.250544    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:41:07.262819    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:41:07.262833    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:41:07.274361    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:41:07.274372    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:41:09.788098    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:14.790308    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:14.790467    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:14.801735    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:41:14.801810    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:14.812520    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:41:14.812590    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:14.822710    5088 logs.go:276] 4 containers: [6ca6e3f2e33c 1820cc0cdcd0 677323d7575f ca7761b8cbf2]
	I0723 07:41:14.822790    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:14.833260    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:41:14.833332    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:14.844069    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:41:14.844135    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:14.854833    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:41:14.854911    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:14.865249    5088 logs.go:276] 0 containers: []
	W0723 07:41:14.865262    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:14.865318    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:14.875270    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:41:14.875288    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:41:14.875292    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:41:14.889127    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:41:14.889137    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:41:14.900417    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:41:14.900426    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:41:14.912092    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:14.912105    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:14.945031    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:41:14.945040    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:41:14.960228    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:41:14.960242    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:41:14.974826    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:14.974836    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:41:14.999228    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:14.999239    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:15.003643    5088 logs.go:123] Gathering logs for coredns [6ca6e3f2e33c] ...
	I0723 07:41:15.003652    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ca6e3f2e33c"
	I0723 07:41:15.014976    5088 logs.go:123] Gathering logs for coredns [1820cc0cdcd0] ...
	I0723 07:41:15.014987    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820cc0cdcd0"
	I0723 07:41:15.027184    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:41:15.027196    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:41:15.039914    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:15.039925    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:15.076141    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:41:15.076155    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:41:15.098370    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:41:15.098380    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:41:15.111787    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:41:15.111802    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:41:17.625803    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:22.628009    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:22.628148    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:22.639287    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:41:22.639367    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:22.650429    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:41:22.650508    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:22.661320    5088 logs.go:276] 4 containers: [6ca6e3f2e33c 1820cc0cdcd0 677323d7575f ca7761b8cbf2]
	I0723 07:41:22.661396    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:22.671688    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:41:22.671752    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:22.681983    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:41:22.682054    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:22.693025    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:41:22.693090    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:22.703500    5088 logs.go:276] 0 containers: []
	W0723 07:41:22.703517    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:22.703579    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:22.714645    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:41:22.714662    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:41:22.714667    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:41:22.727025    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:41:22.727037    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:41:22.738688    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:41:22.738699    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:41:22.756667    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:41:22.756681    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:41:22.768113    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:22.768123    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:41:22.793631    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:22.793643    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:22.825997    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:22.826005    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:22.830523    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:22.830529    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:22.875507    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:41:22.875518    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:41:22.887358    5088 logs.go:123] Gathering logs for coredns [6ca6e3f2e33c] ...
	I0723 07:41:22.887368    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ca6e3f2e33c"
	I0723 07:41:22.901981    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:41:22.901993    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:41:22.919422    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:41:22.919433    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:41:22.933745    5088 logs.go:123] Gathering logs for coredns [1820cc0cdcd0] ...
	I0723 07:41:22.933759    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820cc0cdcd0"
	I0723 07:41:22.946058    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:41:22.946069    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:41:22.960770    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:41:22.960782    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:41:25.473963    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:30.476264    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:30.476405    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:30.495178    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:41:30.495272    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:30.509240    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:41:30.509311    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:30.529139    5088 logs.go:276] 4 containers: [6ca6e3f2e33c 1820cc0cdcd0 677323d7575f ca7761b8cbf2]
	I0723 07:41:30.529209    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:30.540211    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:41:30.540280    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:30.553026    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:41:30.553097    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:30.564360    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:41:30.564431    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:30.574606    5088 logs.go:276] 0 containers: []
	W0723 07:41:30.574622    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:30.574684    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:30.585161    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:41:30.585181    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:30.585186    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:30.620915    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:41:30.620926    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:41:30.641922    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:41:30.641934    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:41:30.657116    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:41:30.657130    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:41:30.677130    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:41:30.677144    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:41:30.694707    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:30.694718    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:30.726286    5088 logs.go:123] Gathering logs for coredns [1820cc0cdcd0] ...
	I0723 07:41:30.726295    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820cc0cdcd0"
	I0723 07:41:30.738019    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:41:30.738031    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:41:30.753259    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:41:30.753271    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:41:30.765775    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:41:30.765786    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:41:30.778467    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:30.778477    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:41:30.802712    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:30.802720    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:30.806633    5088 logs.go:123] Gathering logs for coredns [6ca6e3f2e33c] ...
	I0723 07:41:30.806641    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ca6e3f2e33c"
	I0723 07:41:30.818154    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:41:30.818166    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:41:30.830139    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:41:30.830151    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:41:33.344204    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:38.346167    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:38.346389    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:38.378919    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:41:38.379016    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:38.393392    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:41:38.393472    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:38.405308    5088 logs.go:276] 4 containers: [6ca6e3f2e33c 1820cc0cdcd0 677323d7575f ca7761b8cbf2]
	I0723 07:41:38.405386    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:38.415567    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:41:38.415634    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:38.431569    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:41:38.431642    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:38.457736    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:41:38.457810    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:38.469509    5088 logs.go:276] 0 containers: []
	W0723 07:41:38.469522    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:38.469583    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:38.480351    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:41:38.480369    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:38.480373    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:38.511279    5088 logs.go:123] Gathering logs for coredns [6ca6e3f2e33c] ...
	I0723 07:41:38.511287    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ca6e3f2e33c"
	I0723 07:41:38.522242    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:41:38.522252    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:41:38.534634    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:41:38.534643    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:41:38.550761    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:41:38.550772    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:41:38.562670    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:41:38.562680    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:41:38.577882    5088 logs.go:123] Gathering logs for coredns [1820cc0cdcd0] ...
	I0723 07:41:38.577894    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820cc0cdcd0"
	I0723 07:41:38.589779    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:38.589796    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:41:38.614134    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:38.614143    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:38.618841    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:41:38.618849    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:41:38.632932    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:41:38.632946    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:41:38.650846    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:41:38.650855    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:41:38.662666    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:38.662681    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:38.697297    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:41:38.697312    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:41:38.709439    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:41:38.709451    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:41:41.223632    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:46.225880    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:46.226028    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:46.241567    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:41:46.241657    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:46.254949    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:41:46.255027    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:46.266282    5088 logs.go:276] 4 containers: [6ca6e3f2e33c 1820cc0cdcd0 677323d7575f ca7761b8cbf2]
	I0723 07:41:46.266356    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:46.276707    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:41:46.276782    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:46.287643    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:41:46.287715    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:46.298261    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:41:46.298334    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:46.308163    5088 logs.go:276] 0 containers: []
	W0723 07:41:46.308173    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:46.308237    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:46.318780    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:41:46.318799    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:41:46.318804    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:41:46.334013    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:41:46.334023    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:41:46.345941    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:46.345951    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:41:46.369736    5088 logs.go:123] Gathering logs for coredns [6ca6e3f2e33c] ...
	I0723 07:41:46.369745    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ca6e3f2e33c"
	I0723 07:41:46.381471    5088 logs.go:123] Gathering logs for coredns [1820cc0cdcd0] ...
	I0723 07:41:46.381481    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820cc0cdcd0"
	I0723 07:41:46.393131    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:41:46.393142    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:41:46.404939    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:41:46.404948    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:41:46.416820    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:46.416830    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:46.448375    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:41:46.448392    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:41:46.462584    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:41:46.462594    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:41:46.476811    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:41:46.476821    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:41:46.503253    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:41:46.503264    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:41:46.514768    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:41:46.514778    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:41:46.526052    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:46.526061    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:46.530520    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:46.530527    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:49.072619    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:41:54.074873    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:41:54.074973    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:41:54.087997    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:41:54.088078    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:41:54.098800    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:41:54.098862    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:41:54.116371    5088 logs.go:276] 4 containers: [6ca6e3f2e33c 1820cc0cdcd0 677323d7575f ca7761b8cbf2]
	I0723 07:41:54.116439    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:41:54.127408    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:41:54.127471    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:41:54.138415    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:41:54.138491    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:41:54.154277    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:41:54.154343    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:41:54.167931    5088 logs.go:276] 0 containers: []
	W0723 07:41:54.167943    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:41:54.167995    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:41:54.184434    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:41:54.184451    5088 logs.go:123] Gathering logs for coredns [1820cc0cdcd0] ...
	I0723 07:41:54.184455    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820cc0cdcd0"
	I0723 07:41:54.195961    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:41:54.195976    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:41:54.207521    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:41:54.207535    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:41:54.219099    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:41:54.219111    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:41:54.231656    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:41:54.231667    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:41:54.267271    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:41:54.267281    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:41:54.280953    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:41:54.280964    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:41:54.292735    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:41:54.292747    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:41:54.296714    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:41:54.296723    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:41:54.308248    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:41:54.308259    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:41:54.326290    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:41:54.326301    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:41:54.351501    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:41:54.351511    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:41:54.383618    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:41:54.383628    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:41:54.398395    5088 logs.go:123] Gathering logs for coredns [6ca6e3f2e33c] ...
	I0723 07:41:54.398406    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ca6e3f2e33c"
	I0723 07:41:54.411894    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:41:54.411909    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:41:56.933473    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:42:01.936232    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:42:01.936590    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:42:01.971287    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:42:01.971431    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:42:01.990580    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:42:01.990667    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:42:02.005159    5088 logs.go:276] 4 containers: [6ca6e3f2e33c 1820cc0cdcd0 677323d7575f ca7761b8cbf2]
	I0723 07:42:02.005237    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:42:02.017937    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:42:02.018008    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:42:02.031835    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:42:02.031905    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:42:02.048143    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:42:02.048217    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:42:02.064025    5088 logs.go:276] 0 containers: []
	W0723 07:42:02.064037    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:42:02.064093    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:42:02.075302    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:42:02.075324    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:42:02.075330    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:42:02.109265    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:42:02.109276    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:42:02.124764    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:42:02.124775    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:42:02.138382    5088 logs.go:123] Gathering logs for coredns [1820cc0cdcd0] ...
	I0723 07:42:02.138395    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820cc0cdcd0"
	I0723 07:42:02.151629    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:42:02.151645    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:42:02.163259    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:42:02.163272    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:42:02.180967    5088 logs.go:123] Gathering logs for coredns [6ca6e3f2e33c] ...
	I0723 07:42:02.180976    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ca6e3f2e33c"
	I0723 07:42:02.192862    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:42:02.192872    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:42:02.216836    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:42:02.216847    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:42:02.228554    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:42:02.228568    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:42:02.244932    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:42:02.244943    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:42:02.270029    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:42:02.270037    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:42:02.282037    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:42:02.282051    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:42:02.314740    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:42:02.314752    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:42:02.319146    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:42:02.319152    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:42:04.833195    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:42:09.835569    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:42:09.835886    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:42:09.866674    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:42:09.866814    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:42:09.885921    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:42:09.886027    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:42:09.903943    5088 logs.go:276] 4 containers: [6ca6e3f2e33c 1820cc0cdcd0 677323d7575f ca7761b8cbf2]
	I0723 07:42:09.904029    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:42:09.915118    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:42:09.915188    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:42:09.925754    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:42:09.925818    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:42:09.936411    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:42:09.936480    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:42:09.946143    5088 logs.go:276] 0 containers: []
	W0723 07:42:09.946157    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:42:09.946225    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:42:09.956399    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:42:09.956416    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:42:09.956421    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:42:09.972411    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:42:09.972423    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:42:10.004734    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:42:10.004743    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:42:10.008901    5088 logs.go:123] Gathering logs for coredns [6ca6e3f2e33c] ...
	I0723 07:42:10.008911    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ca6e3f2e33c"
	I0723 07:42:10.020385    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:42:10.020399    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:42:10.037996    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:42:10.038008    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:42:10.050987    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:42:10.051002    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:42:10.073199    5088 logs.go:123] Gathering logs for coredns [1820cc0cdcd0] ...
	I0723 07:42:10.073216    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820cc0cdcd0"
	I0723 07:42:10.085226    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:42:10.085238    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:42:10.097406    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:42:10.097417    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:42:10.109881    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:42:10.109893    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:42:10.149783    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:42:10.149794    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:42:10.162006    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:42:10.162016    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:42:10.180579    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:42:10.180590    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:42:10.196146    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:42:10.196158    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:42:12.723104    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:42:17.725386    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:42:17.725539    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:42:17.741333    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:42:17.741412    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:42:17.753639    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:42:17.753704    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:42:17.764349    5088 logs.go:276] 4 containers: [6ca6e3f2e33c 1820cc0cdcd0 677323d7575f ca7761b8cbf2]
	I0723 07:42:17.764419    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:42:17.775026    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:42:17.775091    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:42:17.789271    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:42:17.789332    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:42:17.799978    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:42:17.800048    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:42:17.810332    5088 logs.go:276] 0 containers: []
	W0723 07:42:17.810349    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:42:17.810404    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:42:17.820783    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:42:17.820805    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:42:17.820810    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:42:17.840844    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:42:17.840857    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:42:17.883259    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:42:17.883270    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:42:17.900738    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:42:17.900749    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:42:17.913080    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:42:17.913091    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:42:17.917203    5088 logs.go:123] Gathering logs for coredns [6ca6e3f2e33c] ...
	I0723 07:42:17.917210    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ca6e3f2e33c"
	I0723 07:42:17.928794    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:42:17.928804    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:42:17.940442    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:42:17.940456    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:42:17.955693    5088 logs.go:123] Gathering logs for coredns [1820cc0cdcd0] ...
	I0723 07:42:17.955707    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820cc0cdcd0"
	I0723 07:42:17.967917    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:42:17.967929    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:42:17.987632    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:42:17.987644    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:42:17.999386    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:42:17.999398    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:42:18.023764    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:42:18.023772    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:42:18.055664    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:42:18.055671    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:42:18.093010    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:42:18.093024    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:42:20.609823    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:42:25.611954    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:42:25.612196    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:42:25.638799    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:42:25.638902    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:42:25.654819    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:42:25.654893    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:42:25.668871    5088 logs.go:276] 4 containers: [6ca6e3f2e33c 1820cc0cdcd0 677323d7575f ca7761b8cbf2]
	I0723 07:42:25.668955    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:42:25.679669    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:42:25.679741    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:42:25.690110    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:42:25.690187    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:42:25.701745    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:42:25.701819    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:42:25.712170    5088 logs.go:276] 0 containers: []
	W0723 07:42:25.712183    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:42:25.712245    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:42:25.723166    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:42:25.723183    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:42:25.723188    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:42:25.739196    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:42:25.739210    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:42:25.753335    5088 logs.go:123] Gathering logs for coredns [6ca6e3f2e33c] ...
	I0723 07:42:25.753344    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ca6e3f2e33c"
	I0723 07:42:25.765731    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:42:25.765746    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:42:25.781387    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:42:25.781398    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:42:25.793806    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:42:25.793823    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:42:25.813115    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:42:25.813129    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:42:25.831344    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:42:25.831354    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:42:25.835754    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:42:25.835762    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:42:25.869064    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:42:25.869076    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:42:25.881131    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:42:25.881141    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:42:25.912073    5088 logs.go:123] Gathering logs for coredns [1820cc0cdcd0] ...
	I0723 07:42:25.912081    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820cc0cdcd0"
	I0723 07:42:25.923873    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:42:25.923887    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:42:25.935497    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:42:25.935507    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:42:25.960719    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:42:25.960730    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:42:28.475952    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:42:33.478277    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:42:33.478576    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0723 07:42:33.520495    5088 logs.go:276] 1 containers: [993a57f8fc32]
	I0723 07:42:33.520637    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0723 07:42:33.541818    5088 logs.go:276] 1 containers: [b1a147e6f741]
	I0723 07:42:33.541917    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0723 07:42:33.557537    5088 logs.go:276] 4 containers: [6ca6e3f2e33c 1820cc0cdcd0 677323d7575f ca7761b8cbf2]
	I0723 07:42:33.557629    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0723 07:42:33.570094    5088 logs.go:276] 1 containers: [65694a1ef4e9]
	I0723 07:42:33.570169    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0723 07:42:33.581298    5088 logs.go:276] 1 containers: [a5991eb65f29]
	I0723 07:42:33.581367    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0723 07:42:33.592348    5088 logs.go:276] 1 containers: [041fbcfd1850]
	I0723 07:42:33.592424    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0723 07:42:33.603181    5088 logs.go:276] 0 containers: []
	W0723 07:42:33.603192    5088 logs.go:278] No container was found matching "kindnet"
	I0723 07:42:33.603252    5088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0723 07:42:33.614184    5088 logs.go:276] 1 containers: [aa8e496b6996]
	I0723 07:42:33.614204    5088 logs.go:123] Gathering logs for kubelet ...
	I0723 07:42:33.614210    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 07:42:33.647669    5088 logs.go:123] Gathering logs for storage-provisioner [aa8e496b6996] ...
	I0723 07:42:33.647678    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8e496b6996"
	I0723 07:42:33.661979    5088 logs.go:123] Gathering logs for Docker ...
	I0723 07:42:33.661990    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0723 07:42:33.687121    5088 logs.go:123] Gathering logs for kube-apiserver [993a57f8fc32] ...
	I0723 07:42:33.687129    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 993a57f8fc32"
	I0723 07:42:33.701694    5088 logs.go:123] Gathering logs for etcd [b1a147e6f741] ...
	I0723 07:42:33.701704    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a147e6f741"
	I0723 07:42:33.722706    5088 logs.go:123] Gathering logs for coredns [6ca6e3f2e33c] ...
	I0723 07:42:33.722717    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ca6e3f2e33c"
	I0723 07:42:33.734550    5088 logs.go:123] Gathering logs for kube-scheduler [65694a1ef4e9] ...
	I0723 07:42:33.734562    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65694a1ef4e9"
	I0723 07:42:33.750040    5088 logs.go:123] Gathering logs for kube-proxy [a5991eb65f29] ...
	I0723 07:42:33.750050    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5991eb65f29"
	I0723 07:42:33.761260    5088 logs.go:123] Gathering logs for dmesg ...
	I0723 07:42:33.761270    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 07:42:33.765508    5088 logs.go:123] Gathering logs for coredns [1820cc0cdcd0] ...
	I0723 07:42:33.765515    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820cc0cdcd0"
	I0723 07:42:33.776687    5088 logs.go:123] Gathering logs for describe nodes ...
	I0723 07:42:33.776698    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 07:42:33.815277    5088 logs.go:123] Gathering logs for coredns [677323d7575f] ...
	I0723 07:42:33.815289    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 677323d7575f"
	I0723 07:42:33.827021    5088 logs.go:123] Gathering logs for coredns [ca7761b8cbf2] ...
	I0723 07:42:33.827032    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca7761b8cbf2"
	I0723 07:42:33.841344    5088 logs.go:123] Gathering logs for kube-controller-manager [041fbcfd1850] ...
	I0723 07:42:33.841354    5088 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 041fbcfd1850"
	I0723 07:42:33.859196    5088 logs.go:123] Gathering logs for container status ...
	I0723 07:42:33.859205    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 07:42:36.372881    5088 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0723 07:42:41.375238    5088 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0723 07:42:41.379924    5088 out.go:177] 
	W0723 07:42:41.383849    5088 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0723 07:42:41.383859    5088 out.go:239] * 
	W0723 07:42:41.384508    5088 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:42:41.395788    5088 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-462000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (584.66s)
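The failure above is a timeout rather than a crash: minikube polled the guest apiserver's /healthz endpoint roughly every eight seconds, re-gathered the same container logs after each failed probe, and finally exited with GUEST_START once the 6m0s node wait expired. A minimal sketch for probing the same endpoint by hand, assuming the stopped-upgrade-462000 guest is still running (the endpoint and the kube-apiserver container ID are taken from the log above):

	# Probe the healthz endpoint minikube was polling:
	minikube -p stopped-upgrade-462000 ssh -- curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# The kube-apiserver container log is the usual place the root cause surfaces:
	minikube -p stopped-upgrade-462000 ssh -- sudo docker logs --tail 100 993a57f8fc32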

TestPause/serial/Start (10.25s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-313000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-313000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.208354834s)

-- stdout --
	* [pause-313000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-313000" primary control-plane node in "pause-313000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-313000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-313000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-313000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-313000 -n pause-313000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-313000 -n pause-313000: exit status 7 (45.742875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-313000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.25s)
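This failure, like every remaining qemu2 start failure below, bottoms out in the same line: ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused. With the socket_vmnet network selected, the qemu2 driver launches each VM through socket_vmnet_client, which needs the socket_vmnet daemon listening on that socket. A minimal check-and-restart sketch, assuming socket_vmnet was installed via Homebrew as in the minikube qemu2 driver docs (paths and service names may differ on other setups):

	# The socket only exists while the daemon is up:
	ls -l /var/run/socket_vmnet
	# Restart the daemon (vmnet access requires root):
	sudo "$(which brew)" services restart socket_vmnet
	# Then retry the failed start:
	out/minikube-darwin-arm64 start -p pause-313000 --memory=2048 --install-addons=false --wait=all --driver=qemu2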

TestNoKubernetes/serial/StartWithK8s (10.03s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-361000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-361000 --driver=qemu2 : exit status 80 (9.976500292s)

-- stdout --
	* [NoKubernetes-361000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-361000" primary control-plane node in "NoKubernetes-361000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-361000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-361000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-361000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-361000 -n NoKubernetes-361000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-361000 -n NoKubernetes-361000: exit status 7 (50.605959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-361000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.03s)

TestNoKubernetes/serial/StartWithStopK8s (7.75s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-361000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-361000 --no-kubernetes --driver=qemu2 : exit status 80 (7.699678542s)

-- stdout --
	* [NoKubernetes-361000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-361000
	* Restarting existing qemu2 VM for "NoKubernetes-361000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-361000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-361000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-361000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-361000 -n NoKubernetes-361000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-361000 -n NoKubernetes-361000: exit status 7 (48.201333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-361000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.75s)
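Because the NoKubernetes-361000 profile already exists at this point, the retry path is "Restarting existing qemu2 VM" and the error surfaces as "driver start" rather than "creating host", but the root cause is the same refused socket. Once the daemon is back up, the delete-then-start that minikube itself suggests above should give a clean reproduction:

	out/minikube-darwin-arm64 delete -p NoKubernetes-361000
	out/minikube-darwin-arm64 start -p NoKubernetes-361000 --no-kubernetes --driver=qemu2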

TestNoKubernetes/serial/Start (7.68s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-361000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-361000 --no-kubernetes --driver=qemu2 : exit status 80 (7.619225834s)

-- stdout --
	* [NoKubernetes-361000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-361000
	* Restarting existing qemu2 VM for "NoKubernetes-361000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-361000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-361000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-361000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-361000 -n NoKubernetes-361000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-361000 -n NoKubernetes-361000: exit status 7 (56.183292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-361000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (7.68s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.3s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19319
- KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2377598543/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.30s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.93s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19319
- KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2300636573/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.93s)
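Both TestHyperkitDriverSkipUpgrade failures are environmental rather than regressions: hyperkit is an Intel-only macOS hypervisor, so on this darwin/arm64 host minikube exits with status 56 (DRV_UNSUPPORTED_OS) before the driver-upgrade logic ever runs. A sketch of the kind of guard that would skip these cases on Apple Silicon (hypothetical, not the harness's actual skip logic):

	# Skip hyperkit driver tests on hosts where the driver cannot run:
	if [ "$(uname -s)" = "Darwin" ] && [ "$(uname -m)" = "arm64" ]; then
	  echo "SKIP: hyperkit is not supported on darwin/arm64"
	  exit 0
	fi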

TestNoKubernetes/serial/StartNoArgs (5.36s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-361000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-361000 --driver=qemu2 : exit status 80 (5.2877615s)

-- stdout --
	* [NoKubernetes-361000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-361000
	* Restarting existing qemu2 VM for "NoKubernetes-361000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-361000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-361000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-361000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-361000 -n NoKubernetes-361000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-361000 -n NoKubernetes-361000: exit status 7 (69.31575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-361000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.36s)

TestNetworkPlugins/group/auto/Start (9.98s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.974566875s)

-- stdout --
	* [auto-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-703000" primary control-plane node in "auto-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:44:31.551766    5550 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:44:31.551904    5550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:44:31.551907    5550 out.go:304] Setting ErrFile to fd 2...
	I0723 07:44:31.551910    5550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:44:31.552040    5550 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:44:31.553076    5550 out.go:298] Setting JSON to false
	I0723 07:44:31.569178    5550 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4435,"bootTime":1721741436,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:44:31.569250    5550 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:44:31.574891    5550 out.go:177] * [auto-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:44:31.579694    5550 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:44:31.579739    5550 notify.go:220] Checking for updates...
	I0723 07:44:31.586806    5550 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:44:31.589712    5550 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:44:31.592685    5550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:44:31.595712    5550 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:44:31.598623    5550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:44:31.602103    5550 config.go:182] Loaded profile config "cert-expiration-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:44:31.602168    5550 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:44:31.602224    5550 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:44:31.606683    5550 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:44:31.613698    5550 start.go:297] selected driver: qemu2
	I0723 07:44:31.613702    5550 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:44:31.613709    5550 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:44:31.616107    5550 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:44:31.619720    5550 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:44:31.622689    5550 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:44:31.622704    5550 cni.go:84] Creating CNI manager for ""
	I0723 07:44:31.622711    5550 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:44:31.622718    5550 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 07:44:31.622745    5550 start.go:340] cluster config:
	{Name:auto-703000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:44:31.626645    5550 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:44:31.633715    5550 out.go:177] * Starting "auto-703000" primary control-plane node in "auto-703000" cluster
	I0723 07:44:31.637700    5550 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:44:31.637741    5550 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:44:31.637753    5550 cache.go:56] Caching tarball of preloaded images
	I0723 07:44:31.637827    5550 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:44:31.637833    5550 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:44:31.637896    5550 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/auto-703000/config.json ...
	I0723 07:44:31.637911    5550 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/auto-703000/config.json: {Name:mk367c56faf468cd27a2c7000633d3ef8699f1e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:44:31.638309    5550 start.go:360] acquireMachinesLock for auto-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:44:31.638347    5550 start.go:364] duration metric: took 31.042µs to acquireMachinesLock for "auto-703000"
	I0723 07:44:31.638361    5550 start.go:93] Provisioning new machine with config: &{Name:auto-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:44:31.638400    5550 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:44:31.645599    5550 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:44:31.663109    5550 start.go:159] libmachine.API.Create for "auto-703000" (driver="qemu2")
	I0723 07:44:31.663134    5550 client.go:168] LocalClient.Create starting
	I0723 07:44:31.663195    5550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:44:31.663228    5550 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:31.663240    5550 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:31.663283    5550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:44:31.663306    5550 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:31.663319    5550 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:31.663677    5550 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:44:31.821264    5550 main.go:141] libmachine: Creating SSH key...
	I0723 07:44:31.898637    5550 main.go:141] libmachine: Creating Disk image...
	I0723 07:44:31.898642    5550 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:44:31.898841    5550 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/disk.qcow2
	I0723 07:44:31.907908    5550 main.go:141] libmachine: STDOUT: 
	I0723 07:44:31.907926    5550 main.go:141] libmachine: STDERR: 
	I0723 07:44:31.907969    5550 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/disk.qcow2 +20000M
	I0723 07:44:31.915801    5550 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:44:31.915814    5550 main.go:141] libmachine: STDERR: 
	I0723 07:44:31.915832    5550 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/disk.qcow2
	I0723 07:44:31.915837    5550 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:44:31.915857    5550 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:44:31.915882    5550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d3:bb:a1:02:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/disk.qcow2
	I0723 07:44:31.917481    5550 main.go:141] libmachine: STDOUT: 
	I0723 07:44:31.917497    5550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:44:31.917515    5550 client.go:171] duration metric: took 254.382292ms to LocalClient.Create
	I0723 07:44:33.919725    5550 start.go:128] duration metric: took 2.281343084s to createHost
	I0723 07:44:33.919776    5550 start.go:83] releasing machines lock for "auto-703000", held for 2.281464458s
	W0723 07:44:33.919838    5550 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:33.931175    5550 out.go:177] * Deleting "auto-703000" in qemu2 ...
	W0723 07:44:33.961143    5550 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:33.961176    5550 start.go:729] Will try again in 5 seconds ...
	I0723 07:44:38.963279    5550 start.go:360] acquireMachinesLock for auto-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:44:38.963745    5550 start.go:364] duration metric: took 378.834µs to acquireMachinesLock for "auto-703000"
	I0723 07:44:38.963898    5550 start.go:93] Provisioning new machine with config: &{Name:auto-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:44:38.964174    5550 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:44:38.978770    5550 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:44:39.029620    5550 start.go:159] libmachine.API.Create for "auto-703000" (driver="qemu2")
	I0723 07:44:39.029675    5550 client.go:168] LocalClient.Create starting
	I0723 07:44:39.029800    5550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:44:39.029867    5550 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:39.029888    5550 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:39.029952    5550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:44:39.029998    5550 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:39.030013    5550 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:39.030502    5550 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:44:39.195703    5550 main.go:141] libmachine: Creating SSH key...
	I0723 07:44:39.429123    5550 main.go:141] libmachine: Creating Disk image...
	I0723 07:44:39.429133    5550 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:44:39.429366    5550 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/disk.qcow2
	I0723 07:44:39.439155    5550 main.go:141] libmachine: STDOUT: 
	I0723 07:44:39.439175    5550 main.go:141] libmachine: STDERR: 
	I0723 07:44:39.439217    5550 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/disk.qcow2 +20000M
	I0723 07:44:39.447283    5550 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:44:39.447297    5550 main.go:141] libmachine: STDERR: 
	I0723 07:44:39.447306    5550 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/disk.qcow2
	I0723 07:44:39.447311    5550 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:44:39.447319    5550 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:44:39.447349    5550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:b4:5b:31:bb:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/auto-703000/disk.qcow2
	I0723 07:44:39.448963    5550 main.go:141] libmachine: STDOUT: 
	I0723 07:44:39.448977    5550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:44:39.448990    5550 client.go:171] duration metric: took 419.319042ms to LocalClient.Create
	I0723 07:44:41.451129    5550 start.go:128] duration metric: took 2.486971083s to createHost
	I0723 07:44:41.451186    5550 start.go:83] releasing machines lock for "auto-703000", held for 2.487467208s
	W0723 07:44:41.451574    5550 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:41.468503    5550 out.go:177] 
	W0723 07:44:41.473464    5550 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:44:41.473497    5550 out.go:239] * 
	* 
	W0723 07:44:41.476082    5550 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:44:41.484247    5550 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.98s)
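Every attempt in this group dies on the same STDERR line: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and host creation fails before the VM boots. A minimal Go sketch of a reachability probe for that unix socket is below; the socket path is taken from the log lines above, but the probe program itself is hypothetical and not part of minikube or the test suite.

	// probe_socket.go - hypothetical pre-flight check (not minikube code):
	// verifies the socket_vmnet daemon is accepting connections on the unix
	// socket that socket_vmnet_client dials in the failing log lines.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path as reported in the log

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the STDERR in the report:
			// the daemon is not running, or not listening on this path.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A "connection refused" from such a probe would point at the daemon (or its launchd service) rather than at QEMU or the tests themselves, which is consistent with every network-plugin Start in this group failing identically.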

TestNetworkPlugins/group/kindnet/Start (9.87s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E0723 07:44:46.531149    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.863096542s)

-- stdout --
	* [kindnet-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-703000" primary control-plane node in "kindnet-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:44:43.621556    5659 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:44:43.621692    5659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:44:43.621695    5659 out.go:304] Setting ErrFile to fd 2...
	I0723 07:44:43.621697    5659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:44:43.621815    5659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:44:43.622874    5659 out.go:298] Setting JSON to false
	I0723 07:44:43.638822    5659 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4447,"bootTime":1721741436,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:44:43.638895    5659 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:44:43.644916    5659 out.go:177] * [kindnet-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:44:43.652939    5659 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:44:43.652994    5659 notify.go:220] Checking for updates...
	I0723 07:44:43.659964    5659 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:44:43.662974    5659 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:44:43.665913    5659 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:44:43.668895    5659 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:44:43.672026    5659 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:44:43.675240    5659 config.go:182] Loaded profile config "cert-expiration-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:44:43.675315    5659 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:44:43.675378    5659 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:44:43.679947    5659 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:44:43.686873    5659 start.go:297] selected driver: qemu2
	I0723 07:44:43.686880    5659 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:44:43.686886    5659 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:44:43.689237    5659 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:44:43.691980    5659 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:44:43.694982    5659 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:44:43.694997    5659 cni.go:84] Creating CNI manager for "kindnet"
	I0723 07:44:43.695006    5659 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0723 07:44:43.695033    5659 start.go:340] cluster config:
	{Name:kindnet-703000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:44:43.698839    5659 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:44:43.706919    5659 out.go:177] * Starting "kindnet-703000" primary control-plane node in "kindnet-703000" cluster
	I0723 07:44:43.710943    5659 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:44:43.710960    5659 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:44:43.710972    5659 cache.go:56] Caching tarball of preloaded images
	I0723 07:44:43.711054    5659 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:44:43.711060    5659 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:44:43.711117    5659 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/kindnet-703000/config.json ...
	I0723 07:44:43.711130    5659 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/kindnet-703000/config.json: {Name:mk643a6c4f91c5ff81781fb60c9cfd955d63d44c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:44:43.711369    5659 start.go:360] acquireMachinesLock for kindnet-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:44:43.711405    5659 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "kindnet-703000"
	I0723 07:44:43.711416    5659 start.go:93] Provisioning new machine with config: &{Name:kindnet-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:44:43.711449    5659 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:44:43.719925    5659 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:44:43.738006    5659 start.go:159] libmachine.API.Create for "kindnet-703000" (driver="qemu2")
	I0723 07:44:43.738034    5659 client.go:168] LocalClient.Create starting
	I0723 07:44:43.738098    5659 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:44:43.738128    5659 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:43.738142    5659 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:43.738177    5659 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:44:43.738203    5659 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:43.738213    5659 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:43.738573    5659 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:44:43.895180    5659 main.go:141] libmachine: Creating SSH key...
	I0723 07:44:44.004436    5659 main.go:141] libmachine: Creating Disk image...
	I0723 07:44:44.004442    5659 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:44:44.004608    5659 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/disk.qcow2
	I0723 07:44:44.013755    5659 main.go:141] libmachine: STDOUT: 
	I0723 07:44:44.013786    5659 main.go:141] libmachine: STDERR: 
	I0723 07:44:44.013835    5659 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/disk.qcow2 +20000M
	I0723 07:44:44.021608    5659 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:44:44.021632    5659 main.go:141] libmachine: STDERR: 
	I0723 07:44:44.021681    5659 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/disk.qcow2
	I0723 07:44:44.021686    5659 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:44:44.021703    5659 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:44:44.021726    5659 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:1c:fe:25:e0:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/disk.qcow2
	I0723 07:44:44.023352    5659 main.go:141] libmachine: STDOUT: 
	I0723 07:44:44.023374    5659 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:44:44.023392    5659 client.go:171] duration metric: took 285.358833ms to LocalClient.Create
	I0723 07:44:46.025556    5659 start.go:128] duration metric: took 2.314132917s to createHost
	I0723 07:44:46.025621    5659 start.go:83] releasing machines lock for "kindnet-703000", held for 2.314252166s
	W0723 07:44:46.025671    5659 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:46.035562    5659 out.go:177] * Deleting "kindnet-703000" in qemu2 ...
	W0723 07:44:46.069647    5659 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:46.069672    5659 start.go:729] Will try again in 5 seconds ...
	I0723 07:44:51.071752    5659 start.go:360] acquireMachinesLock for kindnet-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:44:51.072315    5659 start.go:364] duration metric: took 446.166µs to acquireMachinesLock for "kindnet-703000"
	I0723 07:44:51.072465    5659 start.go:93] Provisioning new machine with config: &{Name:kindnet-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:44:51.072743    5659 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:44:51.081407    5659 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:44:51.133284    5659 start.go:159] libmachine.API.Create for "kindnet-703000" (driver="qemu2")
	I0723 07:44:51.133328    5659 client.go:168] LocalClient.Create starting
	I0723 07:44:51.133442    5659 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:44:51.133510    5659 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:51.133529    5659 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:51.133603    5659 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:44:51.133648    5659 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:51.133666    5659 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:51.134180    5659 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:44:51.300721    5659 main.go:141] libmachine: Creating SSH key...
	I0723 07:44:51.390378    5659 main.go:141] libmachine: Creating Disk image...
	I0723 07:44:51.390388    5659 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:44:51.390560    5659 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/disk.qcow2
	I0723 07:44:51.399739    5659 main.go:141] libmachine: STDOUT: 
	I0723 07:44:51.399779    5659 main.go:141] libmachine: STDERR: 
	I0723 07:44:51.399826    5659 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/disk.qcow2 +20000M
	I0723 07:44:51.407572    5659 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:44:51.407588    5659 main.go:141] libmachine: STDERR: 
	I0723 07:44:51.407599    5659 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/disk.qcow2
	I0723 07:44:51.407603    5659 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:44:51.407620    5659 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:44:51.407640    5659 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:9c:65:31:ea:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kindnet-703000/disk.qcow2
	I0723 07:44:51.409247    5659 main.go:141] libmachine: STDOUT: 
	I0723 07:44:51.409261    5659 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:44:51.409273    5659 client.go:171] duration metric: took 275.94475ms to LocalClient.Create
	I0723 07:44:53.411409    5659 start.go:128] duration metric: took 2.338682s to createHost
	I0723 07:44:53.411484    5659 start.go:83] releasing machines lock for "kindnet-703000", held for 2.3391915s
	W0723 07:44:53.411790    5659 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:53.424534    5659 out.go:177] 
	W0723 07:44:53.428481    5659 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:44:53.428505    5659 out.go:239] * 
	* 
	W0723 07:44:53.431731    5659 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:44:53.443362    5659 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.87s)
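As with the auto run, the log shows minikube's two-attempt flow: the first createHost fails, the profile is deleted, and after a fixed five-second pause a second createHost fails with the identical error. The shape of that flow, reduced to a schematic Go sketch for readability (illustrative only; createHost here is a stand-in, not minikube's actual implementation):

	// retry_sketch.go - schematic of the start/retry pattern visible in the
	// logs above (one retry after a fixed 5s delay); hypothetical code.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the driver's host-creation step, which in this
	// report always fails because socket_vmnet is not accepting connections.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startWithRetry() error {
		if err := createHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			return createHost()         // second and final attempt, as in the log
		}
		return nil
	}

	func main() {
		if err := startWithRetry(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}

Because the daemon stays down for the whole window, the retry cannot succeed: both attempts return the same connection-refused error and each test exits with status 80.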

TestNetworkPlugins/group/flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.858777084s)

-- stdout --
	* [flannel-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-703000" primary control-plane node in "flannel-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:44:55.691154    5777 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:44:55.691285    5777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:44:55.691290    5777 out.go:304] Setting ErrFile to fd 2...
	I0723 07:44:55.691292    5777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:44:55.691457    5777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:44:55.692754    5777 out.go:298] Setting JSON to false
	I0723 07:44:55.710722    5777 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4459,"bootTime":1721741436,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:44:55.710804    5777 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:44:55.715733    5777 out.go:177] * [flannel-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:44:55.723722    5777 notify.go:220] Checking for updates...
	I0723 07:44:55.727687    5777 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:44:55.735730    5777 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:44:55.743653    5777 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:44:55.751684    5777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:44:55.755680    5777 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:44:55.762657    5777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:44:55.766966    5777 config.go:182] Loaded profile config "cert-expiration-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:44:55.767038    5777 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:44:55.767083    5777 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:44:55.770621    5777 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:44:55.776627    5777 start.go:297] selected driver: qemu2
	I0723 07:44:55.776635    5777 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:44:55.776641    5777 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:44:55.778960    5777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:44:55.783679    5777 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:44:55.786706    5777 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:44:55.786747    5777 cni.go:84] Creating CNI manager for "flannel"
	I0723 07:44:55.786751    5777 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0723 07:44:55.786784    5777 start.go:340] cluster config:
	{Name:flannel-703000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:44:55.790437    5777 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:44:55.794744    5777 out.go:177] * Starting "flannel-703000" primary control-plane node in "flannel-703000" cluster
	I0723 07:44:55.801660    5777 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:44:55.801684    5777 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:44:55.801692    5777 cache.go:56] Caching tarball of preloaded images
	I0723 07:44:55.801785    5777 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:44:55.801790    5777 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:44:55.801851    5777 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/flannel-703000/config.json ...
	I0723 07:44:55.801861    5777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/flannel-703000/config.json: {Name:mk50371901482b21ade3a3c863722f121cba47bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:44:55.802439    5777 start.go:360] acquireMachinesLock for flannel-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:44:55.802470    5777 start.go:364] duration metric: took 25.459µs to acquireMachinesLock for "flannel-703000"
	I0723 07:44:55.802490    5777 start.go:93] Provisioning new machine with config: &{Name:flannel-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:44:55.802522    5777 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:44:55.811682    5777 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:44:55.827799    5777 start.go:159] libmachine.API.Create for "flannel-703000" (driver="qemu2")
	I0723 07:44:55.827837    5777 client.go:168] LocalClient.Create starting
	I0723 07:44:55.827926    5777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:44:55.827962    5777 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:55.827971    5777 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:55.828010    5777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:44:55.828033    5777 main.go:141] libmachine: Decoding PEM data...
	I0723 07:44:55.828045    5777 main.go:141] libmachine: Parsing certificate...
	I0723 07:44:55.828392    5777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:44:55.984348    5777 main.go:141] libmachine: Creating SSH key...
	I0723 07:44:56.099431    5777 main.go:141] libmachine: Creating Disk image...
	I0723 07:44:56.099437    5777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:44:56.099621    5777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/disk.qcow2
	I0723 07:44:56.108800    5777 main.go:141] libmachine: STDOUT: 
	I0723 07:44:56.108822    5777 main.go:141] libmachine: STDERR: 
	I0723 07:44:56.108869    5777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/disk.qcow2 +20000M
	I0723 07:44:56.116844    5777 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:44:56.116859    5777 main.go:141] libmachine: STDERR: 
	I0723 07:44:56.116873    5777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/disk.qcow2
	I0723 07:44:56.116877    5777 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:44:56.116890    5777 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:44:56.116923    5777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:a1:27:24:fb:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/disk.qcow2
	I0723 07:44:56.118612    5777 main.go:141] libmachine: STDOUT: 
	I0723 07:44:56.118627    5777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:44:56.118649    5777 client.go:171] duration metric: took 290.813959ms to LocalClient.Create
	I0723 07:44:58.120790    5777 start.go:128] duration metric: took 2.318291375s to createHost
	I0723 07:44:58.120871    5777 start.go:83] releasing machines lock for "flannel-703000", held for 2.318434625s
	W0723 07:44:58.120932    5777 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:58.132267    5777 out.go:177] * Deleting "flannel-703000" in qemu2 ...
	W0723 07:44:58.164351    5777 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:44:58.164377    5777 start.go:729] Will try again in 5 seconds ...
	I0723 07:45:03.166591    5777 start.go:360] acquireMachinesLock for flannel-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:45:03.167150    5777 start.go:364] duration metric: took 444.583µs to acquireMachinesLock for "flannel-703000"
	I0723 07:45:03.167285    5777 start.go:93] Provisioning new machine with config: &{Name:flannel-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:45:03.167637    5777 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:45:03.184186    5777 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:45:03.234621    5777 start.go:159] libmachine.API.Create for "flannel-703000" (driver="qemu2")
	I0723 07:45:03.234661    5777 client.go:168] LocalClient.Create starting
	I0723 07:45:03.234777    5777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:45:03.234846    5777 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:03.234865    5777 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:03.234924    5777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:45:03.234969    5777 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:03.234984    5777 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:03.235509    5777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:45:03.400650    5777 main.go:141] libmachine: Creating SSH key...
	I0723 07:45:03.457942    5777 main.go:141] libmachine: Creating Disk image...
	I0723 07:45:03.457947    5777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:45:03.458118    5777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/disk.qcow2
	I0723 07:45:03.467440    5777 main.go:141] libmachine: STDOUT: 
	I0723 07:45:03.467459    5777 main.go:141] libmachine: STDERR: 
	I0723 07:45:03.467512    5777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/disk.qcow2 +20000M
	I0723 07:45:03.475326    5777 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:45:03.475343    5777 main.go:141] libmachine: STDERR: 
	I0723 07:45:03.475354    5777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/disk.qcow2
	I0723 07:45:03.475359    5777 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:45:03.475367    5777 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:45:03.475395    5777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:cf:66:12:2a:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/flannel-703000/disk.qcow2
	I0723 07:45:03.477024    5777 main.go:141] libmachine: STDOUT: 
	I0723 07:45:03.477037    5777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:45:03.477050    5777 client.go:171] duration metric: took 242.390417ms to LocalClient.Create
	I0723 07:45:05.479181    5777 start.go:128] duration metric: took 2.31156475s to createHost
	I0723 07:45:05.479249    5777 start.go:83] releasing machines lock for "flannel-703000", held for 2.312119125s
	W0723 07:45:05.479628    5777 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:05.488191    5777 out.go:177] 
	W0723 07:45:05.493220    5777 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:45:05.493255    5777 out.go:239] * 
	* 
	W0723 07:45:05.495641    5777 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:45:05.504136    5777 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.86s)
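
Note: every start attempt in this group fails at the same step. The qemu-img convert/resize calls succeed ("Image resized."), but the VM is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 never boots. A minimal triage sketch for the CI host follows; it assumes socket_vmnet is managed as a Homebrew service (the restart step is an assumption about this host — both paths are taken verbatim from the log):

	# Does the socket exist, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Restart the daemon if it is down; it needs root to create vmnet interfaces.
	sudo brew services restart socket_vmnet

	# Re-check with the same client binary minikube uses; on a healthy socket this
	# should connect, exec `true`, and exit 0 instead of printing the error above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true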

TestNetworkPlugins/group/enable-default-cni/Start (9.9s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.89528375s)

-- stdout --
	* [enable-default-cni-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-703000" primary control-plane node in "enable-default-cni-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:45:07.879454    5895 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:45:07.879561    5895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:45:07.879564    5895 out.go:304] Setting ErrFile to fd 2...
	I0723 07:45:07.879566    5895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:45:07.879716    5895 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:45:07.880817    5895 out.go:298] Setting JSON to false
	I0723 07:45:07.896652    5895 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4471,"bootTime":1721741436,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:45:07.896713    5895 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:45:07.903201    5895 out.go:177] * [enable-default-cni-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:45:07.910195    5895 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:45:07.910250    5895 notify.go:220] Checking for updates...
	I0723 07:45:07.918215    5895 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:45:07.921061    5895 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:45:07.924183    5895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:45:07.927164    5895 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:45:07.928474    5895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:45:07.931522    5895 config.go:182] Loaded profile config "cert-expiration-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:45:07.931590    5895 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:45:07.931634    5895 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:45:07.936133    5895 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:45:07.941189    5895 start.go:297] selected driver: qemu2
	I0723 07:45:07.941198    5895 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:45:07.941206    5895 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:45:07.943325    5895 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:45:07.946201    5895 out.go:177] * Automatically selected the socket_vmnet network
	E0723 07:45:07.949267    5895 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0723 07:45:07.949279    5895 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:45:07.949310    5895 cni.go:84] Creating CNI manager for "bridge"
	I0723 07:45:07.949315    5895 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 07:45:07.949345    5895 start.go:340] cluster config:
	{Name:enable-default-cni-703000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:45:07.952899    5895 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:45:07.960266    5895 out.go:177] * Starting "enable-default-cni-703000" primary control-plane node in "enable-default-cni-703000" cluster
	I0723 07:45:07.964084    5895 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:45:07.964103    5895 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:45:07.964116    5895 cache.go:56] Caching tarball of preloaded images
	I0723 07:45:07.964169    5895 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:45:07.964175    5895 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:45:07.964239    5895 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/enable-default-cni-703000/config.json ...
	I0723 07:45:07.964254    5895 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/enable-default-cni-703000/config.json: {Name:mkcda973dd2c275faf7381c2827e125fc0758edd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:45:07.964627    5895 start.go:360] acquireMachinesLock for enable-default-cni-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:45:07.964660    5895 start.go:364] duration metric: took 27µs to acquireMachinesLock for "enable-default-cni-703000"
	I0723 07:45:07.964672    5895 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:45:07.964697    5895 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:45:07.971134    5895 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:45:07.989059    5895 start.go:159] libmachine.API.Create for "enable-default-cni-703000" (driver="qemu2")
	I0723 07:45:07.989093    5895 client.go:168] LocalClient.Create starting
	I0723 07:45:07.989151    5895 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:45:07.989183    5895 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:07.989191    5895 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:07.989232    5895 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:45:07.989263    5895 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:07.989271    5895 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:07.989770    5895 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:45:08.146334    5895 main.go:141] libmachine: Creating SSH key...
	I0723 07:45:08.280386    5895 main.go:141] libmachine: Creating Disk image...
	I0723 07:45:08.280398    5895 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:45:08.280599    5895 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/disk.qcow2
	I0723 07:45:08.290279    5895 main.go:141] libmachine: STDOUT: 
	I0723 07:45:08.290298    5895 main.go:141] libmachine: STDERR: 
	I0723 07:45:08.290351    5895 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/disk.qcow2 +20000M
	I0723 07:45:08.298169    5895 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:45:08.298189    5895 main.go:141] libmachine: STDERR: 
	I0723 07:45:08.298203    5895 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/disk.qcow2
	I0723 07:45:08.298211    5895 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:45:08.298220    5895 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:45:08.298250    5895 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:e8:aa:e5:8e:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/disk.qcow2
	I0723 07:45:08.299968    5895 main.go:141] libmachine: STDOUT: 
	I0723 07:45:08.299982    5895 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:45:08.300001    5895 client.go:171] duration metric: took 310.909875ms to LocalClient.Create
	I0723 07:45:10.302132    5895 start.go:128] duration metric: took 2.337460792s to createHost
	I0723 07:45:10.302205    5895 start.go:83] releasing machines lock for "enable-default-cni-703000", held for 2.337582792s
	W0723 07:45:10.302249    5895 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:10.312240    5895 out.go:177] * Deleting "enable-default-cni-703000" in qemu2 ...
	W0723 07:45:10.345159    5895 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:10.345192    5895 start.go:729] Will try again in 5 seconds ...
	I0723 07:45:15.347248    5895 start.go:360] acquireMachinesLock for enable-default-cni-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:45:15.347745    5895 start.go:364] duration metric: took 414µs to acquireMachinesLock for "enable-default-cni-703000"
	I0723 07:45:15.347856    5895 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:45:15.348174    5895 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:45:15.365828    5895 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:45:15.417381    5895 start.go:159] libmachine.API.Create for "enable-default-cni-703000" (driver="qemu2")
	I0723 07:45:15.417433    5895 client.go:168] LocalClient.Create starting
	I0723 07:45:15.417554    5895 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:45:15.417618    5895 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:15.417633    5895 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:15.417697    5895 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:45:15.417748    5895 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:15.417760    5895 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:15.418390    5895 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:45:15.586409    5895 main.go:141] libmachine: Creating SSH key...
	I0723 07:45:15.669734    5895 main.go:141] libmachine: Creating Disk image...
	I0723 07:45:15.669739    5895 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:45:15.669922    5895 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/disk.qcow2
	I0723 07:45:15.679253    5895 main.go:141] libmachine: STDOUT: 
	I0723 07:45:15.679267    5895 main.go:141] libmachine: STDERR: 
	I0723 07:45:15.679331    5895 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/disk.qcow2 +20000M
	I0723 07:45:15.687097    5895 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:45:15.687114    5895 main.go:141] libmachine: STDERR: 
	I0723 07:45:15.687128    5895 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/disk.qcow2
	I0723 07:45:15.687138    5895 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:45:15.687146    5895 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:45:15.687182    5895 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:a2:9c:25:2f:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/enable-default-cni-703000/disk.qcow2
	I0723 07:45:15.688846    5895 main.go:141] libmachine: STDOUT: 
	I0723 07:45:15.688861    5895 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:45:15.688873    5895 client.go:171] duration metric: took 271.440375ms to LocalClient.Create
	I0723 07:45:17.691007    5895 start.go:128] duration metric: took 2.342855375s to createHost
	I0723 07:45:17.691130    5895 start.go:83] releasing machines lock for "enable-default-cni-703000", held for 2.343361042s
	W0723 07:45:17.691517    5895 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:17.707140    5895 out.go:177] 
	W0723 07:45:17.711055    5895 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:45:17.711089    5895 out.go:239] * 
	* 
	W0723 07:45:17.713683    5895 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:45:17.721121    5895 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.90s)
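
Note: separately from the socket_vmnet failure, this test still passes --enable-default-cni=true, which minikube reports as deprecated and rewrites to --cni=bridge (see the E0723 start_flags.go:464 line above; the generated cluster config accordingly shows NetworkPlugin:cni and CNI:bridge). The equivalent invocation without the deprecated flag, mirroring the bridge test below, would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2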

TestNetworkPlugins/group/bridge/Start (9.83s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.825876708s)

-- stdout --
	* [bridge-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-703000" primary control-plane node in "bridge-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:45:19.936737    6006 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:45:19.936949    6006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:45:19.936952    6006 out.go:304] Setting ErrFile to fd 2...
	I0723 07:45:19.936954    6006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:45:19.937080    6006 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:45:19.938077    6006 out.go:298] Setting JSON to false
	I0723 07:45:19.954180    6006 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4483,"bootTime":1721741436,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:45:19.954248    6006 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:45:19.959676    6006 out.go:177] * [bridge-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:45:19.967540    6006 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:45:19.967581    6006 notify.go:220] Checking for updates...
	I0723 07:45:19.973637    6006 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:45:19.975018    6006 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:45:19.977619    6006 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:45:19.980665    6006 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:45:19.983658    6006 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:45:19.986937    6006 config.go:182] Loaded profile config "cert-expiration-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:45:19.987002    6006 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:45:19.987052    6006 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:45:19.991589    6006 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:45:19.998622    6006 start.go:297] selected driver: qemu2
	I0723 07:45:19.998629    6006 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:45:19.998635    6006 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:45:20.000958    6006 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:45:20.003640    6006 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:45:20.006764    6006 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:45:20.006806    6006 cni.go:84] Creating CNI manager for "bridge"
	I0723 07:45:20.006811    6006 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 07:45:20.006845    6006 start.go:340] cluster config:
	{Name:bridge-703000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:45:20.010571    6006 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:45:20.017668    6006 out.go:177] * Starting "bridge-703000" primary control-plane node in "bridge-703000" cluster
	I0723 07:45:20.020631    6006 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:45:20.020647    6006 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:45:20.020658    6006 cache.go:56] Caching tarball of preloaded images
	I0723 07:45:20.020736    6006 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:45:20.020742    6006 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:45:20.020818    6006 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/bridge-703000/config.json ...
	I0723 07:45:20.020830    6006 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/bridge-703000/config.json: {Name:mk35379b78e8b0d57ecb96a3b7893f76d9a5b87c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:45:20.021051    6006 start.go:360] acquireMachinesLock for bridge-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:45:20.021084    6006 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "bridge-703000"
	I0723 07:45:20.021097    6006 start.go:93] Provisioning new machine with config: &{Name:bridge-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:45:20.021124    6006 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:45:20.028455    6006 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:45:20.046040    6006 start.go:159] libmachine.API.Create for "bridge-703000" (driver="qemu2")
	I0723 07:45:20.046098    6006 client.go:168] LocalClient.Create starting
	I0723 07:45:20.046172    6006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:45:20.046201    6006 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:20.046211    6006 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:20.046249    6006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:45:20.046272    6006 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:20.046282    6006 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:20.046641    6006 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:45:20.203015    6006 main.go:141] libmachine: Creating SSH key...
	I0723 07:45:20.253719    6006 main.go:141] libmachine: Creating Disk image...
	I0723 07:45:20.253724    6006 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:45:20.253879    6006 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/disk.qcow2
	I0723 07:45:20.262892    6006 main.go:141] libmachine: STDOUT: 
	I0723 07:45:20.262907    6006 main.go:141] libmachine: STDERR: 
	I0723 07:45:20.262968    6006 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/disk.qcow2 +20000M
	I0723 07:45:20.270757    6006 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:45:20.270781    6006 main.go:141] libmachine: STDERR: 
	I0723 07:45:20.270793    6006 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/disk.qcow2
	I0723 07:45:20.270798    6006 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:45:20.270809    6006 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:45:20.270834    6006 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:ef:b6:92:97:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/disk.qcow2
	I0723 07:45:20.272445    6006 main.go:141] libmachine: STDOUT: 
	I0723 07:45:20.272459    6006 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:45:20.272477    6006 client.go:171] duration metric: took 226.379333ms to LocalClient.Create
	I0723 07:45:22.274616    6006 start.go:128] duration metric: took 2.253515666s to createHost
	I0723 07:45:22.274674    6006 start.go:83] releasing machines lock for "bridge-703000", held for 2.253626083s
	W0723 07:45:22.274771    6006 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:22.290042    6006 out.go:177] * Deleting "bridge-703000" in qemu2 ...
	W0723 07:45:22.318598    6006 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:22.318623    6006 start.go:729] Will try again in 5 seconds ...
	I0723 07:45:27.320691    6006 start.go:360] acquireMachinesLock for bridge-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:45:27.321165    6006 start.go:364] duration metric: took 387.167µs to acquireMachinesLock for "bridge-703000"
	I0723 07:45:27.321314    6006 start.go:93] Provisioning new machine with config: &{Name:bridge-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:45:27.321702    6006 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:45:27.335442    6006 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:45:27.386062    6006 start.go:159] libmachine.API.Create for "bridge-703000" (driver="qemu2")
	I0723 07:45:27.386111    6006 client.go:168] LocalClient.Create starting
	I0723 07:45:27.386224    6006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:45:27.386282    6006 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:27.386297    6006 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:27.386372    6006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:45:27.386418    6006 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:27.386431    6006 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:27.386935    6006 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:45:27.553213    6006 main.go:141] libmachine: Creating SSH key...
	I0723 07:45:27.671510    6006 main.go:141] libmachine: Creating Disk image...
	I0723 07:45:27.671515    6006 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:45:27.671698    6006 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/disk.qcow2
	I0723 07:45:27.680907    6006 main.go:141] libmachine: STDOUT: 
	I0723 07:45:27.680925    6006 main.go:141] libmachine: STDERR: 
	I0723 07:45:27.680975    6006 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/disk.qcow2 +20000M
	I0723 07:45:27.688725    6006 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:45:27.688740    6006 main.go:141] libmachine: STDERR: 
	I0723 07:45:27.688753    6006 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/disk.qcow2
	I0723 07:45:27.688758    6006 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:45:27.688772    6006 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:45:27.688803    6006 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:94:37:9e:45:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/bridge-703000/disk.qcow2
	I0723 07:45:27.690390    6006 main.go:141] libmachine: STDOUT: 
	I0723 07:45:27.690404    6006 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:45:27.690416    6006 client.go:171] duration metric: took 304.306958ms to LocalClient.Create
	I0723 07:45:29.692554    6006 start.go:128] duration metric: took 2.37087s to createHost
	I0723 07:45:29.692615    6006 start.go:83] releasing machines lock for "bridge-703000", held for 2.371473s
	W0723 07:45:29.693055    6006 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:29.707530    6006 out.go:177] 
	W0723 07:45:29.711784    6006 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:45:29.711828    6006 out.go:239] * 
	* 
	W0723 07:45:29.714170    6006 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:45:29.721686    6006 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.83s)
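
Every failure in this group shares one root cause: nothing is listening on /var/run/socket_vmnet when socket_vmnet_client tries to hand QEMU a network file descriptor. Below is a minimal Go sketch of a pre-flight check for that condition; the socket path is taken from the logs above, and the helper itself is illustrative, not part of minikube.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// Dial the socket_vmnet unix socket the same way socket_vmnet_client
	// would, and report whether a daemon is accepting connections.
	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing runs above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the state these tests hit: connection refused (or
			// "no such file") because no socket_vmnet daemon is running.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
	}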

TestNetworkPlugins/group/kubenet/Start (9.94s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.938404917s)

-- stdout --
	* [kubenet-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-703000" primary control-plane node in "kubenet-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:45:31.915860    6116 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:45:31.915988    6116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:45:31.915991    6116 out.go:304] Setting ErrFile to fd 2...
	I0723 07:45:31.915993    6116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:45:31.916110    6116 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:45:31.917151    6116 out.go:298] Setting JSON to false
	I0723 07:45:31.933326    6116 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4495,"bootTime":1721741436,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:45:31.933393    6116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:45:31.939960    6116 out.go:177] * [kubenet-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:45:31.947930    6116 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:45:31.947990    6116 notify.go:220] Checking for updates...
	I0723 07:45:31.953832    6116 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:45:31.956888    6116 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:45:31.959933    6116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:45:31.961293    6116 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:45:31.963875    6116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:45:31.967262    6116 config.go:182] Loaded profile config "cert-expiration-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:45:31.967328    6116 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:45:31.967373    6116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:45:31.971657    6116 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:45:31.978879    6116 start.go:297] selected driver: qemu2
	I0723 07:45:31.978889    6116 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:45:31.978896    6116 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:45:31.981320    6116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:45:31.984914    6116 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:45:31.987948    6116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:45:31.987966    6116 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0723 07:45:31.987987    6116 start.go:340] cluster config:
	{Name:kubenet-703000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:45:31.991697    6116 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:45:31.997848    6116 out.go:177] * Starting "kubenet-703000" primary control-plane node in "kubenet-703000" cluster
	I0723 07:45:32.001884    6116 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:45:32.001901    6116 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:45:32.001913    6116 cache.go:56] Caching tarball of preloaded images
	I0723 07:45:32.001988    6116 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:45:32.001994    6116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:45:32.002076    6116 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/kubenet-703000/config.json ...
	I0723 07:45:32.002093    6116 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/kubenet-703000/config.json: {Name:mkc0f17b345514dfb9b060c464f29d17585a0874 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:45:32.002333    6116 start.go:360] acquireMachinesLock for kubenet-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:45:32.002369    6116 start.go:364] duration metric: took 29.708µs to acquireMachinesLock for "kubenet-703000"
	I0723 07:45:32.002382    6116 start.go:93] Provisioning new machine with config: &{Name:kubenet-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:45:32.002414    6116 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:45:32.009847    6116 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:45:32.028238    6116 start.go:159] libmachine.API.Create for "kubenet-703000" (driver="qemu2")
	I0723 07:45:32.028267    6116 client.go:168] LocalClient.Create starting
	I0723 07:45:32.028361    6116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:45:32.028390    6116 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:32.028401    6116 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:32.028440    6116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:45:32.028463    6116 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:32.028475    6116 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:32.028823    6116 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:45:32.187588    6116 main.go:141] libmachine: Creating SSH key...
	I0723 07:45:32.266677    6116 main.go:141] libmachine: Creating Disk image...
	I0723 07:45:32.266683    6116 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:45:32.266854    6116 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/disk.qcow2
	I0723 07:45:32.276148    6116 main.go:141] libmachine: STDOUT: 
	I0723 07:45:32.276166    6116 main.go:141] libmachine: STDERR: 
	I0723 07:45:32.276224    6116 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/disk.qcow2 +20000M
	I0723 07:45:32.283998    6116 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:45:32.284011    6116 main.go:141] libmachine: STDERR: 
	I0723 07:45:32.284029    6116 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/disk.qcow2
	I0723 07:45:32.284035    6116 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:45:32.284047    6116 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:45:32.284074    6116 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:ab:b6:9b:b8:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/disk.qcow2
	I0723 07:45:32.285614    6116 main.go:141] libmachine: STDOUT: 
	I0723 07:45:32.285631    6116 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:45:32.285651    6116 client.go:171] duration metric: took 257.3845ms to LocalClient.Create
	I0723 07:45:34.287845    6116 start.go:128] duration metric: took 2.285451791s to createHost
	I0723 07:45:34.287932    6116 start.go:83] releasing machines lock for "kubenet-703000", held for 2.285598083s
	W0723 07:45:34.287997    6116 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:34.304231    6116 out.go:177] * Deleting "kubenet-703000" in qemu2 ...
	W0723 07:45:34.331832    6116 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:34.331861    6116 start.go:729] Will try again in 5 seconds ...
	I0723 07:45:39.333970    6116 start.go:360] acquireMachinesLock for kubenet-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:45:39.334461    6116 start.go:364] duration metric: took 391.25µs to acquireMachinesLock for "kubenet-703000"
	I0723 07:45:39.334575    6116 start.go:93] Provisioning new machine with config: &{Name:kubenet-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:45:39.335073    6116 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:45:39.351708    6116 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:45:39.402842    6116 start.go:159] libmachine.API.Create for "kubenet-703000" (driver="qemu2")
	I0723 07:45:39.402898    6116 client.go:168] LocalClient.Create starting
	I0723 07:45:39.403035    6116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:45:39.403096    6116 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:39.403113    6116 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:39.403186    6116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:45:39.403229    6116 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:39.403239    6116 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:39.403743    6116 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:45:39.569453    6116 main.go:141] libmachine: Creating SSH key...
	I0723 07:45:39.759584    6116 main.go:141] libmachine: Creating Disk image...
	I0723 07:45:39.759590    6116 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:45:39.759784    6116 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/disk.qcow2
	I0723 07:45:39.769345    6116 main.go:141] libmachine: STDOUT: 
	I0723 07:45:39.769367    6116 main.go:141] libmachine: STDERR: 
	I0723 07:45:39.769424    6116 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/disk.qcow2 +20000M
	I0723 07:45:39.777353    6116 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:45:39.777368    6116 main.go:141] libmachine: STDERR: 
	I0723 07:45:39.777379    6116 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/disk.qcow2
	I0723 07:45:39.777385    6116 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:45:39.777396    6116 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:45:39.777421    6116 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:e8:58:35:6c:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/kubenet-703000/disk.qcow2
	I0723 07:45:39.778986    6116 main.go:141] libmachine: STDOUT: 
	I0723 07:45:39.779003    6116 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:45:39.779015    6116 client.go:171] duration metric: took 376.11925ms to LocalClient.Create
	I0723 07:45:41.781150    6116 start.go:128] duration metric: took 2.446077708s to createHost
	I0723 07:45:41.781212    6116 start.go:83] releasing machines lock for "kubenet-703000", held for 2.446773709s
	W0723 07:45:41.781558    6116 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:41.792116    6116 out.go:177] 
	W0723 07:45:41.800214    6116 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:45:41.800239    6116 out.go:239] * 
	* 
	W0723 07:45:41.810369    6116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:45:41.817403    6116 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.94s)
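
The kubenet run also shows the driver's recovery path clearly: the first createHost attempt fails, the half-created profile is deleted, and a single retry runs after a fixed five-second wait before the test gives up with exit status 80. Below is a hedged sketch of that control flow; startHost here is a stand-in for minikube's internal host-creation step, not its real API.

	package main

	import (
		"errors"
		"log"
		"time"
	)

	// startHost stands in for the driver's host-creation step; in the log
	// above it fails both times with "Connection refused" on socket_vmnet.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// startWithOneRetry mirrors the behavior in the log: try once, report,
	// wait five seconds, try exactly once more, and surface the final error.
	func startWithOneRetry() error {
		err := startHost()
		if err == nil {
			return nil
		}
		log.Printf("! StartHost failed, but will try again: %v", err)
		time.Sleep(5 * time.Second)
		return startHost()
	}

	func main() {
		if err := startWithOneRetry(); err != nil {
			log.Fatalf("X Exiting due to GUEST_PROVISION: %v", err)
		}
	}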

TestNetworkPlugins/group/custom-flannel/Start (9.76s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.756133375s)

-- stdout --
	* [custom-flannel-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-703000" primary control-plane node in "custom-flannel-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:45:43.983569    6230 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:45:43.983700    6230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:45:43.983703    6230 out.go:304] Setting ErrFile to fd 2...
	I0723 07:45:43.983706    6230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:45:43.983827    6230 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:45:43.984831    6230 out.go:298] Setting JSON to false
	I0723 07:45:44.000560    6230 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4507,"bootTime":1721741436,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:45:44.000619    6230 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:45:44.006728    6230 out.go:177] * [custom-flannel-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:45:44.014460    6230 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:45:44.014517    6230 notify.go:220] Checking for updates...
	I0723 07:45:44.022687    6230 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:45:44.024107    6230 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:45:44.027642    6230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:45:44.030665    6230 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:45:44.033697    6230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:45:44.037020    6230 config.go:182] Loaded profile config "cert-expiration-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:45:44.037092    6230 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:45:44.037139    6230 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:45:44.041579    6230 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:45:44.048658    6230 start.go:297] selected driver: qemu2
	I0723 07:45:44.048667    6230 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:45:44.048675    6230 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:45:44.050915    6230 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:45:44.053588    6230 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:45:44.056820    6230 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:45:44.056851    6230 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0723 07:45:44.056870    6230 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0723 07:45:44.056902    6230 start.go:340] cluster config:
	{Name:custom-flannel-703000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:45:44.060537    6230 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:45:44.067649    6230 out.go:177] * Starting "custom-flannel-703000" primary control-plane node in "custom-flannel-703000" cluster
	I0723 07:45:44.070608    6230 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:45:44.070622    6230 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:45:44.070634    6230 cache.go:56] Caching tarball of preloaded images
	I0723 07:45:44.070703    6230 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:45:44.070708    6230 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:45:44.070778    6230 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/custom-flannel-703000/config.json ...
	I0723 07:45:44.070791    6230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/custom-flannel-703000/config.json: {Name:mkb3b9e317e5480300ce2a193db4a76bda8c8ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:45:44.071024    6230 start.go:360] acquireMachinesLock for custom-flannel-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:45:44.071063    6230 start.go:364] duration metric: took 27.709µs to acquireMachinesLock for "custom-flannel-703000"
	I0723 07:45:44.071077    6230 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:45:44.071103    6230 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:45:44.077548    6230 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:45:44.094514    6230 start.go:159] libmachine.API.Create for "custom-flannel-703000" (driver="qemu2")
	I0723 07:45:44.094537    6230 client.go:168] LocalClient.Create starting
	I0723 07:45:44.094602    6230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:45:44.094632    6230 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:44.094642    6230 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:44.094680    6230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:45:44.094708    6230 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:44.094714    6230 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:44.095079    6230 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:45:44.251721    6230 main.go:141] libmachine: Creating SSH key...
	I0723 07:45:44.300863    6230 main.go:141] libmachine: Creating Disk image...
	I0723 07:45:44.300868    6230 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:45:44.301030    6230 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/disk.qcow2
	I0723 07:45:44.310110    6230 main.go:141] libmachine: STDOUT: 
	I0723 07:45:44.310123    6230 main.go:141] libmachine: STDERR: 
	I0723 07:45:44.310168    6230 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/disk.qcow2 +20000M
	I0723 07:45:44.317922    6230 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:45:44.317938    6230 main.go:141] libmachine: STDERR: 
	I0723 07:45:44.317956    6230 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/disk.qcow2
	I0723 07:45:44.317962    6230 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:45:44.317974    6230 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:45:44.318003    6230 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:62:1d:cb:98:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/disk.qcow2
	I0723 07:45:44.319643    6230 main.go:141] libmachine: STDOUT: 
	I0723 07:45:44.319657    6230 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:45:44.319683    6230 client.go:171] duration metric: took 225.147041ms to LocalClient.Create
	I0723 07:45:46.321850    6230 start.go:128] duration metric: took 2.250771208s to createHost
	I0723 07:45:46.321915    6230 start.go:83] releasing machines lock for "custom-flannel-703000", held for 2.250888083s
	W0723 07:45:46.322007    6230 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:46.332132    6230 out.go:177] * Deleting "custom-flannel-703000" in qemu2 ...
	W0723 07:45:46.363814    6230 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:46.363870    6230 start.go:729] Will try again in 5 seconds ...
	I0723 07:45:51.366039    6230 start.go:360] acquireMachinesLock for custom-flannel-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:45:51.366608    6230 start.go:364] duration metric: took 473.666µs to acquireMachinesLock for "custom-flannel-703000"
	I0723 07:45:51.366756    6230 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:45:51.367062    6230 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:45:51.376665    6230 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:45:51.427289    6230 start.go:159] libmachine.API.Create for "custom-flannel-703000" (driver="qemu2")
	I0723 07:45:51.427354    6230 client.go:168] LocalClient.Create starting
	I0723 07:45:51.427477    6230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:45:51.427541    6230 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:51.427559    6230 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:51.427623    6230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:45:51.427686    6230 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:51.427697    6230 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:51.428255    6230 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:45:51.593588    6230 main.go:141] libmachine: Creating SSH key...
	I0723 07:45:51.644082    6230 main.go:141] libmachine: Creating Disk image...
	I0723 07:45:51.644091    6230 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:45:51.644262    6230 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/disk.qcow2
	I0723 07:45:51.653527    6230 main.go:141] libmachine: STDOUT: 
	I0723 07:45:51.653543    6230 main.go:141] libmachine: STDERR: 
	I0723 07:45:51.653588    6230 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/disk.qcow2 +20000M
	I0723 07:45:51.661488    6230 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:45:51.661500    6230 main.go:141] libmachine: STDERR: 
	I0723 07:45:51.661510    6230 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/disk.qcow2
	I0723 07:45:51.661512    6230 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:45:51.661524    6230 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:45:51.661554    6230 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:c2:f3:0e:5a:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/custom-flannel-703000/disk.qcow2
	I0723 07:45:51.663097    6230 main.go:141] libmachine: STDOUT: 
	I0723 07:45:51.663110    6230 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:45:51.663120    6230 client.go:171] duration metric: took 235.76625ms to LocalClient.Create
	I0723 07:45:53.665262    6230 start.go:128] duration metric: took 2.298214375s to createHost
	I0723 07:45:53.665314    6230 start.go:83] releasing machines lock for "custom-flannel-703000", held for 2.298728208s
	W0723 07:45:53.665742    6230 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:53.677254    6230 out.go:177] 
	W0723 07:45:53.681377    6230 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:45:53.681516    6230 out.go:239] * 
	* 
	W0723 07:45:53.684451    6230 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:45:53.698307    6230 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.76s)
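
For reference, the command that fails in each run wraps QEMU in socket_vmnet_client: the client connects to the unix socket, passes the resulting connection to QEMU as file descriptor 3, and QEMU consumes it through -netdev socket,id=net0,fd=3. The Go sketch below assembles a shortened version of that invocation; the binary paths and flags are copied from the custom-flannel log above, and several arguments (firmware drive, ISO, QMP socket, pidfile, MAC address, full disk path) are omitted for brevity.

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// socket_vmnet_client dials /var/run/socket_vmnet and launches QEMU
		// with the connection on fd 3; "-netdev socket,id=net0,fd=3" uses it.
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client", "/var/run/socket_vmnet",
			"qemu-system-aarch64",
			"-M", "virt,highmem=off",
			"-cpu", "host",
			"-accel", "hvf",
			"-m", "3072", "-smp", "2",
			"-display", "none",
			"-device", "virtio-net-pci,netdev=net0",
			"-netdev", "socket,id=net0,fd=3",
			"disk.qcow2", // the real runs pass the profile's full disk path here
		)
		if out, err := cmd.CombinedOutput(); err != nil {
			// With no daemon on the socket this fails exactly as logged:
			//   Failed to connect to "/var/run/socket_vmnet": Connection refused
			log.Fatalf("%v\n%s", err, out)
		}
	}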

TestNetworkPlugins/group/calico/Start (9.82s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.817562375s)

-- stdout --
	* [calico-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-703000" primary control-plane node in "calico-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:45:56.085781    6347 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:45:56.085903    6347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:45:56.085907    6347 out.go:304] Setting ErrFile to fd 2...
	I0723 07:45:56.085910    6347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:45:56.086037    6347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:45:56.087065    6347 out.go:298] Setting JSON to false
	I0723 07:45:56.103299    6347 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4520,"bootTime":1721741436,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:45:56.103489    6347 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:45:56.110674    6347 out.go:177] * [calico-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:45:56.118695    6347 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:45:56.118716    6347 notify.go:220] Checking for updates...
	I0723 07:45:56.125630    6347 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:45:56.128654    6347 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:45:56.131686    6347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:45:56.134631    6347 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:45:56.137575    6347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:45:56.140906    6347 config.go:182] Loaded profile config "cert-expiration-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:45:56.140974    6347 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:45:56.141026    6347 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:45:56.145584    6347 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:45:56.152653    6347 start.go:297] selected driver: qemu2
	I0723 07:45:56.152662    6347 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:45:56.152670    6347 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:45:56.154953    6347 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:45:56.157625    6347 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:45:56.160763    6347 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:45:56.160789    6347 cni.go:84] Creating CNI manager for "calico"
	I0723 07:45:56.160795    6347 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0723 07:45:56.160825    6347 start.go:340] cluster config:
	{Name:calico-703000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:45:56.164387    6347 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:45:56.171625    6347 out.go:177] * Starting "calico-703000" primary control-plane node in "calico-703000" cluster
	I0723 07:45:56.175609    6347 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:45:56.175622    6347 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:45:56.175630    6347 cache.go:56] Caching tarball of preloaded images
	I0723 07:45:56.175694    6347 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:45:56.175699    6347 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:45:56.175755    6347 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/calico-703000/config.json ...
	I0723 07:45:56.175767    6347 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/calico-703000/config.json: {Name:mke17dadafb1c44a4037a835e15c554f99a662e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:45:56.175997    6347 start.go:360] acquireMachinesLock for calico-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:45:56.176033    6347 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "calico-703000"
	I0723 07:45:56.176045    6347 start.go:93] Provisioning new machine with config: &{Name:calico-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:45:56.176088    6347 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:45:56.183632    6347 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:45:56.202055    6347 start.go:159] libmachine.API.Create for "calico-703000" (driver="qemu2")
	I0723 07:45:56.202090    6347 client.go:168] LocalClient.Create starting
	I0723 07:45:56.202171    6347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:45:56.202201    6347 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:56.202211    6347 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:56.202252    6347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:45:56.202276    6347 main.go:141] libmachine: Decoding PEM data...
	I0723 07:45:56.202287    6347 main.go:141] libmachine: Parsing certificate...
	I0723 07:45:56.202638    6347 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:45:56.360366    6347 main.go:141] libmachine: Creating SSH key...
	I0723 07:45:56.430726    6347 main.go:141] libmachine: Creating Disk image...
	I0723 07:45:56.430731    6347 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:45:56.430917    6347 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/disk.qcow2
	I0723 07:45:56.440478    6347 main.go:141] libmachine: STDOUT: 
	I0723 07:45:56.440499    6347 main.go:141] libmachine: STDERR: 
	I0723 07:45:56.440551    6347 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/disk.qcow2 +20000M
	I0723 07:45:56.448403    6347 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:45:56.448417    6347 main.go:141] libmachine: STDERR: 
	I0723 07:45:56.448434    6347 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/disk.qcow2
	I0723 07:45:56.448439    6347 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:45:56.448452    6347 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:45:56.448485    6347 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:8b:99:72:39:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/disk.qcow2
	I0723 07:45:56.450072    6347 main.go:141] libmachine: STDOUT: 
	I0723 07:45:56.450100    6347 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:45:56.450124    6347 client.go:171] duration metric: took 248.034042ms to LocalClient.Create
	I0723 07:45:58.452308    6347 start.go:128] duration metric: took 2.276242834s to createHost
	I0723 07:45:58.452380    6347 start.go:83] releasing machines lock for "calico-703000", held for 2.276379917s
	W0723 07:45:58.452429    6347 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:58.464724    6347 out.go:177] * Deleting "calico-703000" in qemu2 ...
	W0723 07:45:58.496437    6347 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:45:58.496464    6347 start.go:729] Will try again in 5 seconds ...
	I0723 07:46:03.498517    6347 start.go:360] acquireMachinesLock for calico-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:46:03.498831    6347 start.go:364] duration metric: took 232.416µs to acquireMachinesLock for "calico-703000"
	I0723 07:46:03.498914    6347 start.go:93] Provisioning new machine with config: &{Name:calico-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:46:03.499076    6347 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:46:03.508304    6347 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:46:03.543184    6347 start.go:159] libmachine.API.Create for "calico-703000" (driver="qemu2")
	I0723 07:46:03.543217    6347 client.go:168] LocalClient.Create starting
	I0723 07:46:03.543298    6347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:46:03.543342    6347 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:03.543355    6347 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:03.543400    6347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:46:03.543423    6347 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:03.543431    6347 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:03.543788    6347 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:46:03.695535    6347 main.go:141] libmachine: Creating SSH key...
	I0723 07:46:03.809248    6347 main.go:141] libmachine: Creating Disk image...
	I0723 07:46:03.809258    6347 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:46:03.809461    6347 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/disk.qcow2
	I0723 07:46:03.818842    6347 main.go:141] libmachine: STDOUT: 
	I0723 07:46:03.818861    6347 main.go:141] libmachine: STDERR: 
	I0723 07:46:03.818911    6347 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/disk.qcow2 +20000M
	I0723 07:46:03.826787    6347 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:46:03.826807    6347 main.go:141] libmachine: STDERR: 
	I0723 07:46:03.826818    6347 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/disk.qcow2
	I0723 07:46:03.826821    6347 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:46:03.826832    6347 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:46:03.826856    6347 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:7a:63:26:d3:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/calico-703000/disk.qcow2
	I0723 07:46:03.828432    6347 main.go:141] libmachine: STDOUT: 
	I0723 07:46:03.828452    6347 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:46:03.828464    6347 client.go:171] duration metric: took 285.247833ms to LocalClient.Create
	I0723 07:46:05.830600    6347 start.go:128] duration metric: took 2.331538083s to createHost
	I0723 07:46:05.830713    6347 start.go:83] releasing machines lock for "calico-703000", held for 2.331903792s
	W0723 07:46:05.831055    6347 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:05.846748    6347 out.go:177] 
	W0723 07:46:05.850760    6347 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:46:05.850810    6347 out.go:239] * 
	* 
	W0723 07:46:05.853557    6347 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:46:05.861678    6347 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.82s)
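
As the "executing:" lines above show, minikube does not launch qemu-system-aarch64 directly: it wraps it in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to the daemon socket and hands the connection to QEMU as file descriptor 3 (hence -netdev socket,id=net0,fd=3). The failing step can therefore be reproduced in isolation; a sketch, assuming socket_vmnet_client's documented "SOCKET COMMAND [ARGS...]" usage:

	# Runs a trivial command with fd 3 connected to the daemon; while the
	# daemon is down this should fail with the same "Connection refused"
	# error reported in the tests above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true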

TestNetworkPlugins/group/false/Start (9.91s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-703000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.908705292s)

-- stdout --
	* [false-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-703000" primary control-plane node in "false-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:46:08.249600    6467 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:46:08.249729    6467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:08.249733    6467 out.go:304] Setting ErrFile to fd 2...
	I0723 07:46:08.249735    6467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:08.249866    6467 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:46:08.250853    6467 out.go:298] Setting JSON to false
	I0723 07:46:08.266576    6467 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4532,"bootTime":1721741436,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:46:08.266645    6467 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:46:08.273035    6467 out.go:177] * [false-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:46:08.280984    6467 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:46:08.281037    6467 notify.go:220] Checking for updates...
	I0723 07:46:08.287867    6467 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:46:08.291035    6467 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:46:08.293993    6467 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:46:08.296906    6467 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:46:08.299957    6467 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:46:08.303366    6467 config.go:182] Loaded profile config "cert-expiration-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:46:08.303439    6467 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:46:08.303487    6467 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:46:08.307911    6467 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:46:08.314975    6467 start.go:297] selected driver: qemu2
	I0723 07:46:08.314981    6467 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:46:08.314987    6467 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:46:08.317243    6467 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:46:08.320934    6467 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:46:08.324001    6467 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:46:08.324034    6467 cni.go:84] Creating CNI manager for "false"
	I0723 07:46:08.324060    6467 start.go:340] cluster config:
	{Name:false-703000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:46:08.327558    6467 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:08.334939    6467 out.go:177] * Starting "false-703000" primary control-plane node in "false-703000" cluster
	I0723 07:46:08.338936    6467 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:46:08.338949    6467 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:46:08.338958    6467 cache.go:56] Caching tarball of preloaded images
	I0723 07:46:08.339013    6467 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:46:08.339019    6467 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:46:08.339082    6467 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/false-703000/config.json ...
	I0723 07:46:08.339094    6467 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/false-703000/config.json: {Name:mkfa9d149c11e03c272745a06f3a550f5f005a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:46:08.339308    6467 start.go:360] acquireMachinesLock for false-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:46:08.339342    6467 start.go:364] duration metric: took 27.834µs to acquireMachinesLock for "false-703000"
	I0723 07:46:08.339353    6467 start.go:93] Provisioning new machine with config: &{Name:false-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:46:08.339384    6467 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:46:08.342974    6467 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:46:08.359910    6467 start.go:159] libmachine.API.Create for "false-703000" (driver="qemu2")
	I0723 07:46:08.359940    6467 client.go:168] LocalClient.Create starting
	I0723 07:46:08.360003    6467 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:46:08.360030    6467 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:08.360038    6467 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:08.360079    6467 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:46:08.360105    6467 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:08.360114    6467 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:08.360491    6467 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:46:08.516793    6467 main.go:141] libmachine: Creating SSH key...
	I0723 07:46:08.691378    6467 main.go:141] libmachine: Creating Disk image...
	I0723 07:46:08.691385    6467 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:46:08.691594    6467 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/disk.qcow2
	I0723 07:46:08.701226    6467 main.go:141] libmachine: STDOUT: 
	I0723 07:46:08.701247    6467 main.go:141] libmachine: STDERR: 
	I0723 07:46:08.701298    6467 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/disk.qcow2 +20000M
	I0723 07:46:08.709405    6467 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:46:08.709419    6467 main.go:141] libmachine: STDERR: 
	I0723 07:46:08.709441    6467 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/disk.qcow2
	I0723 07:46:08.709448    6467 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:46:08.709463    6467 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:46:08.709494    6467 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:60:35:f6:40:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/disk.qcow2
	I0723 07:46:08.711127    6467 main.go:141] libmachine: STDOUT: 
	I0723 07:46:08.711143    6467 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:46:08.711167    6467 client.go:171] duration metric: took 351.227541ms to LocalClient.Create
	I0723 07:46:10.713299    6467 start.go:128] duration metric: took 2.373940541s to createHost
	I0723 07:46:10.713350    6467 start.go:83] releasing machines lock for "false-703000", held for 2.37404625s
	W0723 07:46:10.713428    6467 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:10.724512    6467 out.go:177] * Deleting "false-703000" in qemu2 ...
	W0723 07:46:10.757044    6467 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:10.757072    6467 start.go:729] Will try again in 5 seconds ...
	I0723 07:46:15.759124    6467 start.go:360] acquireMachinesLock for false-703000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:46:15.759527    6467 start.go:364] duration metric: took 324.291µs to acquireMachinesLock for "false-703000"
	I0723 07:46:15.759660    6467 start.go:93] Provisioning new machine with config: &{Name:false-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:46:15.759917    6467 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:46:15.771427    6467 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 07:46:15.821275    6467 start.go:159] libmachine.API.Create for "false-703000" (driver="qemu2")
	I0723 07:46:15.821325    6467 client.go:168] LocalClient.Create starting
	I0723 07:46:15.821447    6467 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:46:15.821515    6467 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:15.821531    6467 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:15.821594    6467 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:46:15.821638    6467 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:15.821652    6467 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:15.822153    6467 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:46:15.987663    6467 main.go:141] libmachine: Creating SSH key...
	I0723 07:46:16.066289    6467 main.go:141] libmachine: Creating Disk image...
	I0723 07:46:16.066295    6467 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:46:16.066472    6467 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/disk.qcow2
	I0723 07:46:16.075541    6467 main.go:141] libmachine: STDOUT: 
	I0723 07:46:16.075562    6467 main.go:141] libmachine: STDERR: 
	I0723 07:46:16.075617    6467 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/disk.qcow2 +20000M
	I0723 07:46:16.083437    6467 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:46:16.083452    6467 main.go:141] libmachine: STDERR: 
	I0723 07:46:16.083461    6467 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/disk.qcow2
	I0723 07:46:16.083466    6467 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:46:16.083477    6467 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:46:16.083510    6467 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:93:a4:4c:38:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/false-703000/disk.qcow2
	I0723 07:46:16.085093    6467 main.go:141] libmachine: STDOUT: 
	I0723 07:46:16.085113    6467 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:46:16.085125    6467 client.go:171] duration metric: took 263.801125ms to LocalClient.Create
	I0723 07:46:18.087256    6467 start.go:128] duration metric: took 2.327357167s to createHost
	I0723 07:46:18.087306    6467 start.go:83] releasing machines lock for "false-703000", held for 2.327800833s
	W0723 07:46:18.087720    6467 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:18.101268    6467 out.go:177] 
	W0723 07:46:18.105411    6467 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:46:18.105459    6467 out.go:239] * 
	* 
	W0723 07:46:18.108117    6467 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:46:18.116292    6467 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.91s)
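
Note the recovery pattern common to all of these runs: after the first "Connection refused" the driver deletes the half-created machine, waits 5 seconds, retries once, and then exits with status 80 (GUEST_PROVISION). Once the daemon is healthy again, the leftover profile can be cleaned up and the start retried by hand; both commands below are taken from the failing invocation above:

	out/minikube-darwin-arm64 delete -p false-703000
	out/minikube-darwin-arm64 start -p false-703000 --memory=3072 --cni=false --driver=qemu2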

TestStartStop/group/old-k8s-version/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-665000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
E0723 07:46:21.241346    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-665000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.994329s)

-- stdout --
	* [old-k8s-version-665000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-665000" primary control-plane node in "old-k8s-version-665000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-665000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:46:20.284288    6576 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:46:20.284424    6576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:20.284428    6576 out.go:304] Setting ErrFile to fd 2...
	I0723 07:46:20.284430    6576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:20.284549    6576 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:46:20.285636    6576 out.go:298] Setting JSON to false
	I0723 07:46:20.301505    6576 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4544,"bootTime":1721741436,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:46:20.301579    6576 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:46:20.306993    6576 out.go:177] * [old-k8s-version-665000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:46:20.313903    6576 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:46:20.313965    6576 notify.go:220] Checking for updates...
	I0723 07:46:20.321890    6576 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:46:20.324901    6576 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:46:20.327886    6576 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:46:20.330899    6576 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:46:20.333865    6576 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:46:20.337149    6576 config.go:182] Loaded profile config "cert-expiration-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:46:20.337217    6576 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:46:20.337265    6576 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:46:20.339864    6576 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:46:20.351904    6576 start.go:297] selected driver: qemu2
	I0723 07:46:20.351911    6576 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:46:20.351919    6576 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:46:20.354346    6576 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:46:20.356778    6576 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:46:20.359892    6576 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:46:20.359923    6576 cni.go:84] Creating CNI manager for ""
	I0723 07:46:20.359930    6576 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0723 07:46:20.359955    6576 start.go:340] cluster config:
	{Name:old-k8s-version-665000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-665000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:46:20.363660    6576 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:20.370827    6576 out.go:177] * Starting "old-k8s-version-665000" primary control-plane node in "old-k8s-version-665000" cluster
	I0723 07:46:20.374861    6576 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0723 07:46:20.374879    6576 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0723 07:46:20.374891    6576 cache.go:56] Caching tarball of preloaded images
	I0723 07:46:20.374962    6576 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:46:20.374969    6576 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0723 07:46:20.375057    6576 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/old-k8s-version-665000/config.json ...
	I0723 07:46:20.375074    6576 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/old-k8s-version-665000/config.json: {Name:mk239555b3ae09558a2ea2471320f986ead4edb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:46:20.375423    6576 start.go:360] acquireMachinesLock for old-k8s-version-665000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:46:20.375459    6576 start.go:364] duration metric: took 29.458µs to acquireMachinesLock for "old-k8s-version-665000"
	I0723 07:46:20.375472    6576 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-665000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-665000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:46:20.375508    6576 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:46:20.382909    6576 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:46:20.401028    6576 start.go:159] libmachine.API.Create for "old-k8s-version-665000" (driver="qemu2")
	I0723 07:46:20.401065    6576 client.go:168] LocalClient.Create starting
	I0723 07:46:20.401120    6576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:46:20.401154    6576 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:20.401163    6576 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:20.401203    6576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:46:20.401226    6576 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:20.401233    6576 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:20.401664    6576 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:46:20.556974    6576 main.go:141] libmachine: Creating SSH key...
	I0723 07:46:20.695317    6576 main.go:141] libmachine: Creating Disk image...
	I0723 07:46:20.695323    6576 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:46:20.695524    6576 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2
	I0723 07:46:20.704999    6576 main.go:141] libmachine: STDOUT: 
	I0723 07:46:20.705015    6576 main.go:141] libmachine: STDERR: 
	I0723 07:46:20.705072    6576 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2 +20000M
	I0723 07:46:20.712825    6576 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:46:20.712839    6576 main.go:141] libmachine: STDERR: 
	I0723 07:46:20.712856    6576 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2
	I0723 07:46:20.712861    6576 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:46:20.712873    6576 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:46:20.712895    6576 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:99:41:6f:da:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2
	I0723 07:46:20.714491    6576 main.go:141] libmachine: STDOUT: 
	I0723 07:46:20.714506    6576 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:46:20.714523    6576 client.go:171] duration metric: took 313.460083ms to LocalClient.Create
	I0723 07:46:22.716658    6576 start.go:128] duration metric: took 2.3411755s to createHost
	I0723 07:46:22.716714    6576 start.go:83] releasing machines lock for "old-k8s-version-665000", held for 2.341290791s
	W0723 07:46:22.716798    6576 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:22.732900    6576 out.go:177] * Deleting "old-k8s-version-665000" in qemu2 ...
	W0723 07:46:22.760196    6576 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:22.760219    6576 start.go:729] Will try again in 5 seconds ...
	I0723 07:46:27.762363    6576 start.go:360] acquireMachinesLock for old-k8s-version-665000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:46:27.762953    6576 start.go:364] duration metric: took 380.541µs to acquireMachinesLock for "old-k8s-version-665000"
	I0723 07:46:27.763077    6576 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-665000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-665000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:46:27.763381    6576 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:46:27.771878    6576 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:46:27.820342    6576 start.go:159] libmachine.API.Create for "old-k8s-version-665000" (driver="qemu2")
	I0723 07:46:27.820394    6576 client.go:168] LocalClient.Create starting
	I0723 07:46:27.820506    6576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:46:27.820562    6576 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:27.820574    6576 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:27.820628    6576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:46:27.820687    6576 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:27.820700    6576 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:27.821206    6576 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:46:27.986458    6576 main.go:141] libmachine: Creating SSH key...
	I0723 07:46:28.186759    6576 main.go:141] libmachine: Creating Disk image...
	I0723 07:46:28.186769    6576 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:46:28.186970    6576 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2
	I0723 07:46:28.196860    6576 main.go:141] libmachine: STDOUT: 
	I0723 07:46:28.196879    6576 main.go:141] libmachine: STDERR: 
	I0723 07:46:28.196936    6576 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2 +20000M
	I0723 07:46:28.204796    6576 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:46:28.204810    6576 main.go:141] libmachine: STDERR: 
	I0723 07:46:28.204820    6576 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2
	I0723 07:46:28.204827    6576 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:46:28.204835    6576 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:46:28.204867    6576 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:96:ac:08:8f:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2
	I0723 07:46:28.206495    6576 main.go:141] libmachine: STDOUT: 
	I0723 07:46:28.206508    6576 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:46:28.206520    6576 client.go:171] duration metric: took 386.129541ms to LocalClient.Create
	I0723 07:46:30.208652    6576 start.go:128] duration metric: took 2.445288s to createHost
	I0723 07:46:30.208725    6576 start.go:83] releasing machines lock for "old-k8s-version-665000", held for 2.4457735s
	W0723 07:46:30.209160    6576 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-665000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:30.219715    6576 out.go:177] 
	W0723 07:46:30.226227    6576 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:46:30.226258    6576 out.go:239] * 
	W0723 07:46:30.227767    6576 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:46:30.237512    6576 out.go:177] 

** /stderr **
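
Every failed start in this log dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu-system-aarch64 process is never launched. A quick way to confirm the daemon's state on the build host is sketched below; it assumes a Homebrew-managed socket_vmnet install, which this report does not actually state:

	# Does the unix socket exist?
	ls -l /var/run/socket_vmnet
	# socket_vmnet normally runs as a root launchd service when installed via Homebrew
	sudo brew services restart socket_vmnet
	# Probe with the same client binary the test used; "Connection refused" means the daemon is still down
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
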
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-665000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000: exit status 7 (63.23225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.06s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-665000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-665000 create -f testdata/busybox.yaml: exit status 1 (28.914666ms)

** stderr ** 
	error: context "old-k8s-version-665000" does not exist

** /stderr **
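
This failure is purely downstream of FirstStart: the VM was never provisioned, so minikube never wrote an old-k8s-version-665000 context into the kubeconfig, and every kubectl --context invocation fails identically. Two standard commands make that visible (the profile and binary names are taken from this log):

	# No old-k8s-version-665000 entry will appear
	kubectl config get-contexts
	# Ask minikube directly which profiles exist and their state
	out/minikube-darwin-arm64 profile list
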
start_stop_delete_test.go:196: kubectl --context old-k8s-version-665000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000: exit status 7 (29.327125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-665000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000: exit status 7 (30.113375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-665000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-665000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-665000 describe deploy/metrics-server -n kube-system: exit status 1 (26.475167ms)

** stderr ** 
	error: context "old-k8s-version-665000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-665000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
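
The expected image string is assembled from the flags in the Run line above: --images=MetricsServer=registry.k8s.io/echoserver:1.4 overrides the addon image and --registries=MetricsServer=fake.domain prefixes it, so a healthy deployment would reference fake.domain/registry.k8s.io/echoserver:1.4. On a working cluster the same override could be verified by hand, roughly:

	out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-665000 \
		--images=MetricsServer=registry.k8s.io/echoserver:1.4 \
		--registries=MetricsServer=fake.domain
	# The Image: line should read fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context old-k8s-version-665000 -n kube-system describe deploy/metrics-server | grep Image

Here the describe step had no context to talk to, so the assertion on the image string never had data to match.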
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000: exit status 7 (28.843625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-665000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-665000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.184488333s)

-- stdout --
	* [old-k8s-version-665000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-665000" primary control-plane node in "old-k8s-version-665000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-665000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-665000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:46:34.131466    6624 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:46:34.131598    6624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:34.131601    6624 out.go:304] Setting ErrFile to fd 2...
	I0723 07:46:34.131604    6624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:34.131732    6624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:46:34.132736    6624 out.go:298] Setting JSON to false
	I0723 07:46:34.148878    6624 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4558,"bootTime":1721741436,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:46:34.148941    6624 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:46:34.154039    6624 out.go:177] * [old-k8s-version-665000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:46:34.161028    6624 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:46:34.161098    6624 notify.go:220] Checking for updates...
	I0723 07:46:34.167982    6624 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:46:34.171008    6624 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:46:34.173996    6624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:46:34.176993    6624 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:46:34.179949    6624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:46:34.183218    6624 config.go:182] Loaded profile config "old-k8s-version-665000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0723 07:46:34.184478    6624 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0723 07:46:34.186976    6624 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:46:34.190987    6624 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:46:34.195929    6624 start.go:297] selected driver: qemu2
	I0723 07:46:34.195938    6624 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-665000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-665000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:46:34.196002    6624 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:46:34.198206    6624 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:46:34.198229    6624 cni.go:84] Creating CNI manager for ""
	I0723 07:46:34.198236    6624 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0723 07:46:34.198271    6624 start.go:340] cluster config:
	{Name:old-k8s-version-665000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-665000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:46:34.201921    6624 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:34.208968    6624 out.go:177] * Starting "old-k8s-version-665000" primary control-plane node in "old-k8s-version-665000" cluster
	I0723 07:46:34.212965    6624 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0723 07:46:34.212981    6624 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0723 07:46:34.212993    6624 cache.go:56] Caching tarball of preloaded images
	I0723 07:46:34.213050    6624 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:46:34.213055    6624 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0723 07:46:34.213113    6624 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/old-k8s-version-665000/config.json ...
	I0723 07:46:34.213526    6624 start.go:360] acquireMachinesLock for old-k8s-version-665000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:46:34.213553    6624 start.go:364] duration metric: took 21.375µs to acquireMachinesLock for "old-k8s-version-665000"
	I0723 07:46:34.213562    6624 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:46:34.213567    6624 fix.go:54] fixHost starting: 
	I0723 07:46:34.213680    6624 fix.go:112] recreateIfNeeded on old-k8s-version-665000: state=Stopped err=<nil>
	W0723 07:46:34.213690    6624 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:46:34.216906    6624 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-665000" ...
	I0723 07:46:34.225006    6624 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:46:34.225050    6624 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:96:ac:08:8f:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2
	I0723 07:46:34.227019    6624 main.go:141] libmachine: STDOUT: 
	I0723 07:46:34.227040    6624 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:46:34.227071    6624 fix.go:56] duration metric: took 13.503333ms for fixHost
	I0723 07:46:34.227075    6624 start.go:83] releasing machines lock for "old-k8s-version-665000", held for 13.51775ms
	W0723 07:46:34.227081    6624 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:46:34.227116    6624 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:34.227121    6624 start.go:729] Will try again in 5 seconds ...
	I0723 07:46:39.229217    6624 start.go:360] acquireMachinesLock for old-k8s-version-665000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:46:39.229675    6624 start.go:364] duration metric: took 363.458µs to acquireMachinesLock for "old-k8s-version-665000"
	I0723 07:46:39.229811    6624 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:46:39.229830    6624 fix.go:54] fixHost starting: 
	I0723 07:46:39.230516    6624 fix.go:112] recreateIfNeeded on old-k8s-version-665000: state=Stopped err=<nil>
	W0723 07:46:39.230547    6624 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:46:39.240103    6624 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-665000" ...
	I0723 07:46:39.244171    6624 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:46:39.244372    6624 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:96:ac:08:8f:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/old-k8s-version-665000/disk.qcow2
	I0723 07:46:39.253062    6624 main.go:141] libmachine: STDOUT: 
	I0723 07:46:39.253113    6624 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:46:39.253175    6624 fix.go:56] duration metric: took 23.34675ms for fixHost
	I0723 07:46:39.253191    6624 start.go:83] releasing machines lock for "old-k8s-version-665000", held for 23.494ms
	W0723 07:46:39.253356    6624 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-665000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:39.261150    6624 out.go:177] 
	W0723 07:46:39.265179    6624 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:46:39.265202    6624 out.go:239] * 
	W0723 07:46:39.267912    6624 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:46:39.275096    6624 out.go:177] 

** /stderr **
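
SecondStart differs from FirstStart only in the code path taken: the profile already exists, so minikube skips create, goes through fixHost, and tries to restart the stopped VM, which then fails on the same socket_vmnet connection. The recovery the output itself suggests is to drop the stale profile and start over once the daemon is reachable:

	# Suggested by the error output above
	out/minikube-darwin-arm64 delete -p old-k8s-version-665000
	out/minikube-darwin-arm64 start -p old-k8s-version-665000 --driver=qemu2 --kubernetes-version=v1.20.0

Deleting the profile only helps after the underlying socket_vmnet failure is fixed; otherwise the next start fails identically.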
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-665000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000: exit status 7 (67.694417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-665000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000: exit status 7 (32.219583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-665000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-665000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-665000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.649083ms)

** stderr ** 
	error: context "old-k8s-version-665000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-665000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000: exit status 7 (29.921917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-665000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
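
The diff is in want/got form: every default k8s.gcr.io image for v1.20.0 is reported missing because the image list ran against a host that never booted. On a running profile the same check is:

	# Lists the images present in the node's container runtime
	out/minikube-darwin-arm64 -p old-k8s-version-665000 image list --format=json

With the host in state Stopped an empty result is expected, so this failure adds no information beyond FirstStart's.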
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000: exit status 7 (29.971459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-665000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-665000 --alsologtostderr -v=1: exit status 83 (39.41825ms)

-- stdout --
	* The control-plane node old-k8s-version-665000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-665000"

-- /stdout --
** stderr ** 
	I0723 07:46:39.546048    6643 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:46:39.546453    6643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:39.546457    6643 out.go:304] Setting ErrFile to fd 2...
	I0723 07:46:39.546459    6643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:39.546638    6643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:46:39.546846    6643 out.go:298] Setting JSON to false
	I0723 07:46:39.546852    6643 mustload.go:65] Loading cluster: old-k8s-version-665000
	I0723 07:46:39.547033    6643 config.go:182] Loaded profile config "old-k8s-version-665000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0723 07:46:39.551145    6643 out.go:177] * The control-plane node old-k8s-version-665000 host is not running: state=Stopped
	I0723 07:46:39.554164    6643 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-665000"

** /stderr **
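
Exit status 83 here corresponds to the advisory path visible in the stderr above: pause loads the cluster config via mustload, finds the control-plane host in state Stopped, and prints the start hint instead of pausing. A status check guards against the spurious attempt; note that status itself exits 7 for a stopped or nonexistent host, as seen in every post-mortem in this report:

	# Only pause when the host is actually up
	if out/minikube-darwin-arm64 status -p old-k8s-version-665000 --format={{.Host}} | grep -q Running; then
		out/minikube-darwin-arm64 pause -p old-k8s-version-665000
	fi
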
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-665000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000: exit status 7 (28.75ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-665000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000: exit status 7 (29.627417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (10.27s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-918000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-918000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (10.201580417s)

-- stdout --
	* [no-preload-918000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-918000" primary control-plane node in "no-preload-918000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-918000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:46:39.858176    6660 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:46:39.858308    6660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:39.858311    6660 out.go:304] Setting ErrFile to fd 2...
	I0723 07:46:39.858314    6660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:39.858446    6660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:46:39.859557    6660 out.go:298] Setting JSON to false
	I0723 07:46:39.875639    6660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4563,"bootTime":1721741436,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:46:39.875707    6660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:46:39.880290    6660 out.go:177] * [no-preload-918000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:46:39.888179    6660 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:46:39.888235    6660 notify.go:220] Checking for updates...
	I0723 07:46:39.896098    6660 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:46:39.900187    6660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:46:39.903101    6660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:46:39.906161    6660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:46:39.909143    6660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:46:39.910572    6660 config.go:182] Loaded profile config "cert-expiration-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:46:39.910636    6660 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:46:39.910689    6660 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:46:39.915140    6660 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:46:39.921952    6660 start.go:297] selected driver: qemu2
	I0723 07:46:39.921959    6660 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:46:39.921965    6660 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:46:39.924323    6660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:46:39.927134    6660 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:46:39.930188    6660 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:46:39.930206    6660 cni.go:84] Creating CNI manager for ""
	I0723 07:46:39.930213    6660 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:46:39.930217    6660 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 07:46:39.930251    6660 start.go:340] cluster config:
	{Name:no-preload-918000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:46:39.933991    6660 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:39.941130    6660 out.go:177] * Starting "no-preload-918000" primary control-plane node in "no-preload-918000" cluster
	I0723 07:46:39.945179    6660 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0723 07:46:39.945262    6660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/no-preload-918000/config.json ...
	I0723 07:46:39.945282    6660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/no-preload-918000/config.json: {Name:mk235c72770722baa0b5438fc81beadc2c3acb3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:46:39.945285    6660 cache.go:107] acquiring lock: {Name:mk65a64c1222dcf5a5836dc48db31002cffd4310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:39.945285    6660 cache.go:107] acquiring lock: {Name:mk04f086810e37107b9959982b35b9fc755383f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:39.945298    6660 cache.go:107] acquiring lock: {Name:mk919509eadfbb6f2054178ac2b2fa3cf069b64e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:39.945369    6660 cache.go:115] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0723 07:46:39.945377    6660 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 93.584µs
	I0723 07:46:39.945393    6660 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0723 07:46:39.945405    6660 cache.go:107] acquiring lock: {Name:mk8bc7ca12e6128e749586513c3559a080e9bbdc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:39.945501    6660 cache.go:107] acquiring lock: {Name:mkb323eb73acc416a6776def846d1c585a2e43b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:39.945481    6660 cache.go:107] acquiring lock: {Name:mk402feeb7415b1c80d254569dc230dc9eefc55c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:39.945526    6660 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 07:46:39.945520    6660 cache.go:107] acquiring lock: {Name:mkedd15eaccece819a00d85708a58d6bef20b3a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:39.945501    6660 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 07:46:39.945527    6660 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0723 07:46:39.945595    6660 cache.go:107] acquiring lock: {Name:mk7fe47afd267bdf21a6658cbe8da6f923d1f41b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:39.945667    6660 start.go:360] acquireMachinesLock for no-preload-918000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:46:39.945650    6660 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 07:46:39.945729    6660 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0723 07:46:39.945817    6660 start.go:364] duration metric: took 141.875µs to acquireMachinesLock for "no-preload-918000"
	I0723 07:46:39.945851    6660 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 07:46:39.945870    6660 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 07:46:39.945832    6660 start.go:93] Provisioning new machine with config: &{Name:no-preload-918000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:46:39.945892    6660 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:46:39.953987    6660 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:46:39.958000    6660 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 07:46:39.958091    6660 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 07:46:39.958147    6660 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0723 07:46:39.958581    6660 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 07:46:39.958618    6660 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 07:46:39.958663    6660 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0723 07:46:39.960160    6660 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 07:46:39.972385    6660 start.go:159] libmachine.API.Create for "no-preload-918000" (driver="qemu2")
	I0723 07:46:39.972414    6660 client.go:168] LocalClient.Create starting
	I0723 07:46:39.972491    6660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:46:39.972522    6660 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:39.972533    6660 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:39.972581    6660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:46:39.972607    6660 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:39.972618    6660 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:39.972999    6660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:46:40.132331    6660 main.go:141] libmachine: Creating SSH key...
	I0723 07:46:40.363422    6660 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0723 07:46:40.385723    6660 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0723 07:46:40.388070    6660 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0723 07:46:40.391851    6660 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0723 07:46:40.424851    6660 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0723 07:46:40.442897    6660 main.go:141] libmachine: Creating Disk image...
	I0723 07:46:40.442907    6660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:46:40.443104    6660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2
	I0723 07:46:40.453233    6660 main.go:141] libmachine: STDOUT: 
	I0723 07:46:40.453244    6660 main.go:141] libmachine: STDERR: 
	I0723 07:46:40.453290    6660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2 +20000M
	I0723 07:46:40.461514    6660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:46:40.461525    6660 main.go:141] libmachine: STDERR: 
	I0723 07:46:40.461535    6660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2
	I0723 07:46:40.461540    6660 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:46:40.461553    6660 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:46:40.461579    6660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:e2:a5:69:27:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2
	I0723 07:46:40.463348    6660 main.go:141] libmachine: STDOUT: 
	I0723 07:46:40.463363    6660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:46:40.463390    6660 client.go:171] duration metric: took 490.981959ms to LocalClient.Create
	I0723 07:46:40.484122    6660 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0723 07:46:40.485493    6660 cache.go:162] opening:  /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0723 07:46:40.543000    6660 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0723 07:46:40.543009    6660 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 597.567542ms
	I0723 07:46:40.543019    6660 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0723 07:46:42.463564    6660 start.go:128] duration metric: took 2.517701459s to createHost
	I0723 07:46:42.463627    6660 start.go:83] releasing machines lock for "no-preload-918000", held for 2.517849333s
	W0723 07:46:42.463695    6660 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:42.480921    6660 out.go:177] * Deleting "no-preload-918000" in qemu2 ...
	W0723 07:46:42.512729    6660 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:42.512825    6660 start.go:729] Will try again in 5 seconds ...
	I0723 07:46:43.325713    6660 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0723 07:46:43.325754    6660 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.380399208s
	I0723 07:46:43.325812    6660 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0723 07:46:43.398306    6660 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0723 07:46:43.398361    6660 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.4529505s
	I0723 07:46:43.398396    6660 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0723 07:46:44.155078    6660 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0723 07:46:44.155157    6660 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 4.209955542s
	I0723 07:46:44.155194    6660 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0723 07:46:44.716744    6660 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0723 07:46:44.716789    6660 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.771360625s
	I0723 07:46:44.716836    6660 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0723 07:46:44.732249    6660 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0723 07:46:44.732292    6660 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.787090875s
	I0723 07:46:44.732319    6660 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0723 07:46:46.511259    6660 cache.go:157] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0723 07:46:46.511309    6660 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 6.5660305s
	I0723 07:46:46.511356    6660 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0723 07:46:46.511396    6660 cache.go:87] Successfully saved all images to host disk.
	I0723 07:46:47.514990    6660 start.go:360] acquireMachinesLock for no-preload-918000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:46:47.515415    6660 start.go:364] duration metric: took 354.291µs to acquireMachinesLock for "no-preload-918000"
	I0723 07:46:47.515514    6660 start.go:93] Provisioning new machine with config: &{Name:no-preload-918000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:46:47.515772    6660 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:46:47.522483    6660 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:46:47.573588    6660 start.go:159] libmachine.API.Create for "no-preload-918000" (driver="qemu2")
	I0723 07:46:47.573651    6660 client.go:168] LocalClient.Create starting
	I0723 07:46:47.573775    6660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:46:47.573834    6660 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:47.573858    6660 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:47.573941    6660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:46:47.573991    6660 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:47.574007    6660 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:47.574582    6660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:46:47.741468    6660 main.go:141] libmachine: Creating SSH key...
	I0723 07:46:47.965244    6660 main.go:141] libmachine: Creating Disk image...
	I0723 07:46:47.965251    6660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:46:47.965448    6660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2
	I0723 07:46:47.975187    6660 main.go:141] libmachine: STDOUT: 
	I0723 07:46:47.975208    6660 main.go:141] libmachine: STDERR: 
	I0723 07:46:47.975274    6660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2 +20000M
	I0723 07:46:47.983257    6660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:46:47.983270    6660 main.go:141] libmachine: STDERR: 
	I0723 07:46:47.983280    6660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2
	I0723 07:46:47.983287    6660 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:46:47.983298    6660 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:46:47.983337    6660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:f0:60:5c:32:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2
	I0723 07:46:47.984989    6660 main.go:141] libmachine: STDOUT: 
	I0723 07:46:47.985010    6660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:46:47.985024    6660 client.go:171] duration metric: took 411.377292ms to LocalClient.Create
	I0723 07:46:49.987290    6660 start.go:128] duration metric: took 2.471516208s to createHost
	I0723 07:46:49.987390    6660 start.go:83] releasing machines lock for "no-preload-918000", held for 2.472001375s
	W0723 07:46:49.987815    6660 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-918000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-918000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:50.001306    6660 out.go:177] 
	W0723 07:46:50.006435    6660 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:46:50.006469    6660 out.go:239] * 
	* 
	W0723 07:46:50.009050    6660 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:46:50.017190    6660 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-918000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (68.233583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.27s)
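Both create attempts above fail at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the QEMU VM is never launched. A minimal Go sketch (not part of the test suite; the socket path is taken from the logs above) that reproduces the connectivity check on the CI host:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path from the failing logs; socket_vmnet must be listening here.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A missing or refused socket matches the GUEST_PROVISION errors above.
			fmt.Printf("cannot connect to %s: %v\n", sock, err)
			return
		}
		defer conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

If this probe also fails on the agent, the failure is environmental (the socket_vmnet daemon is down), which is consistent with the identical "Connection refused" errors across every test in this group.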

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-918000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-918000 create -f testdata/busybox.yaml: exit status 1 (29.908209ms)

** stderr ** 
	error: context "no-preload-918000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-918000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (29.021625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (29.005833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
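The DeployApp failure is a cascade from FirstStart: the cluster was never created, so no kubeconfig context named "no-preload-918000" exists and kubectl exits before applying testdata/busybox.yaml. A hedged sketch of the same context lookup using client-go's clientcmd package (an assumed dependency for illustration only; the harness itself shells out to kubectl):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load kubeconfig from the default locations ($KUBECONFIG or ~/.kube/config).
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			fmt.Println("failed to load kubeconfig:", err)
			return
		}
		// Profile name from the test above; kubectl performs the same lookup.
		if _, ok := cfg.Contexts["no-preload-918000"]; !ok {
			fmt.Println(`context "no-preload-918000" does not exist`)
			return
		}
		fmt.Println("context present; kubectl create -f would proceed")
	}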

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-918000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-918000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-918000 describe deploy/metrics-server -n kube-system: exit status 1 (26.358125ms)

** stderr ** 
	error: context "no-preload-918000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-918000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (30.013583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-918000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-918000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.183092583s)

-- stdout --
	* [no-preload-918000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-918000" primary control-plane node in "no-preload-918000" cluster
	* Restarting existing qemu2 VM for "no-preload-918000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-918000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:46:52.274771    6731 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:46:52.274890    6731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:52.274894    6731 out.go:304] Setting ErrFile to fd 2...
	I0723 07:46:52.274896    6731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:52.275016    6731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:46:52.276059    6731 out.go:298] Setting JSON to false
	I0723 07:46:52.292021    6731 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4576,"bootTime":1721741436,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:46:52.292092    6731 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:46:52.296391    6731 out.go:177] * [no-preload-918000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:46:52.303640    6731 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:46:52.303700    6731 notify.go:220] Checking for updates...
	I0723 07:46:52.309595    6731 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:46:52.312568    6731 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:46:52.314024    6731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:46:52.321571    6731 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:46:52.324610    6731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:46:52.326251    6731 config.go:182] Loaded profile config "no-preload-918000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0723 07:46:52.326519    6731 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:46:52.330503    6731 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:46:52.337414    6731 start.go:297] selected driver: qemu2
	I0723 07:46:52.337422    6731 start.go:901] validating driver "qemu2" against &{Name:no-preload-918000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:46:52.337485    6731 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:46:52.339911    6731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:46:52.339964    6731 cni.go:84] Creating CNI manager for ""
	I0723 07:46:52.339975    6731 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:46:52.339994    6731 start.go:340] cluster config:
	{Name:no-preload-918000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:46:52.343787    6731 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:52.351566    6731 out.go:177] * Starting "no-preload-918000" primary control-plane node in "no-preload-918000" cluster
	I0723 07:46:52.355536    6731 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0723 07:46:52.355615    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/no-preload-918000/config.json ...
	I0723 07:46:52.355627    6731 cache.go:107] acquiring lock: {Name:mk65a64c1222dcf5a5836dc48db31002cffd4310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:52.355640    6731 cache.go:107] acquiring lock: {Name:mk919509eadfbb6f2054178ac2b2fa3cf069b64e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:52.355678    6731 cache.go:107] acquiring lock: {Name:mkedd15eaccece819a00d85708a58d6bef20b3a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:52.355697    6731 cache.go:115] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0723 07:46:52.355711    6731 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 77µs
	I0723 07:46:52.355718    6731 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0723 07:46:52.355727    6731 cache.go:107] acquiring lock: {Name:mk8bc7ca12e6128e749586513c3559a080e9bbdc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:52.355727    6731 cache.go:107] acquiring lock: {Name:mk04f086810e37107b9959982b35b9fc755383f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:52.355730    6731 cache.go:115] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0723 07:46:52.355751    6731 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 123.542µs
	I0723 07:46:52.355756    6731 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0723 07:46:52.355778    6731 cache.go:115] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0723 07:46:52.355784    6731 cache.go:115] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0723 07:46:52.355785    6731 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 59.083µs
	I0723 07:46:52.355789    6731 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0723 07:46:52.355788    6731 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 62.167µs
	I0723 07:46:52.355834    6731 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0723 07:46:52.355795    6731 cache.go:107] acquiring lock: {Name:mk7fe47afd267bdf21a6658cbe8da6f923d1f41b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:52.355810    6731 cache.go:107] acquiring lock: {Name:mk402feeb7415b1c80d254569dc230dc9eefc55c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:52.355802    6731 cache.go:115] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0723 07:46:52.355873    6731 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 226.833µs
	I0723 07:46:52.355879    6731 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0723 07:46:52.355835    6731 cache.go:107] acquiring lock: {Name:mkb323eb73acc416a6776def846d1c585a2e43b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:52.355901    6731 cache.go:115] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0723 07:46:52.355906    6731 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 145.416µs
	I0723 07:46:52.355915    6731 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0723 07:46:52.355922    6731 cache.go:115] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0723 07:46:52.355927    6731 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 132.084µs
	I0723 07:46:52.355931    6731 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0723 07:46:52.355936    6731 cache.go:115] /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0723 07:46:52.355941    6731 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 127.292µs
	I0723 07:46:52.355945    6731 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0723 07:46:52.355948    6731 cache.go:87] Successfully saved all images to host disk.
	I0723 07:46:52.356000    6731 start.go:360] acquireMachinesLock for no-preload-918000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:46:52.356028    6731 start.go:364] duration metric: took 22.208µs to acquireMachinesLock for "no-preload-918000"
	I0723 07:46:52.356038    6731 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:46:52.356045    6731 fix.go:54] fixHost starting: 
	I0723 07:46:52.356166    6731 fix.go:112] recreateIfNeeded on no-preload-918000: state=Stopped err=<nil>
	W0723 07:46:52.356178    6731 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:46:52.364535    6731 out.go:177] * Restarting existing qemu2 VM for "no-preload-918000" ...
	I0723 07:46:52.368575    6731 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:46:52.368608    6731 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:f0:60:5c:32:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2
	I0723 07:46:52.370656    6731 main.go:141] libmachine: STDOUT: 
	I0723 07:46:52.370674    6731 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:46:52.370703    6731 fix.go:56] duration metric: took 14.660042ms for fixHost
	I0723 07:46:52.370708    6731 start.go:83] releasing machines lock for "no-preload-918000", held for 14.676792ms
	W0723 07:46:52.370723    6731 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:46:52.370752    6731 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:52.370756    6731 start.go:729] Will try again in 5 seconds ...
	I0723 07:46:57.372835    6731 start.go:360] acquireMachinesLock for no-preload-918000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:46:57.373232    6731 start.go:364] duration metric: took 318.5µs to acquireMachinesLock for "no-preload-918000"
	I0723 07:46:57.373341    6731 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:46:57.373361    6731 fix.go:54] fixHost starting: 
	I0723 07:46:57.374038    6731 fix.go:112] recreateIfNeeded on no-preload-918000: state=Stopped err=<nil>
	W0723 07:46:57.374070    6731 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:46:57.378496    6731 out.go:177] * Restarting existing qemu2 VM for "no-preload-918000" ...
	I0723 07:46:57.385416    6731 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:46:57.385636    6731 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:f0:60:5c:32:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/no-preload-918000/disk.qcow2
	I0723 07:46:57.394366    6731 main.go:141] libmachine: STDOUT: 
	I0723 07:46:57.394427    6731 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:46:57.394493    6731 fix.go:56] duration metric: took 21.129333ms for fixHost
	I0723 07:46:57.394512    6731 start.go:83] releasing machines lock for "no-preload-918000", held for 21.2555ms
	W0723 07:46:57.394679    6731 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-918000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-918000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:46:57.403401    6731 out.go:177] 
	W0723 07:46:57.406482    6731 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:46:57.406506    6731 out.go:239] * 
	* 
	W0723 07:46:57.409188    6731 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:46:57.416435    6731 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-918000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (70.628416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
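
Every failure in this group bottoms out at the same driver error: socket_vmnet_client cannot dial the Unix socket at /var/run/socket_vmnet, so the QEMU process is never launched. A minimal, hypothetical Go probe (socket path taken from the logs above; not part of the test suite) that distinguishes a missing socket file from a daemon that is not accepting connections:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path used by the qemu2 driver above
		if _, err := os.Stat(sock); err != nil {
			fmt.Println("socket file missing:", err) // daemon was likely never started
			return
		}
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the driver error in the logs
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}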

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-918000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (32.618291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
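
Because the restart above never completed, minikube never rewrote a kubeconfig entry, so the post-stop assertions die on the client-config lookup rather than on the cluster itself. A sketch of an equivalent pre-check, assuming kubectl is on PATH; `kubectl config get-contexts -o name` lists the contexts the harness's client config would see:

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
	)

	// contextExists reports whether the named context is present in the
	// active kubeconfig.
	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			if sc.Text() == name {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := contextExists("no-preload-918000")
		fmt.Println(ok, err) // false <nil> in this run: the cluster never came up
	}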

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-918000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-918000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-918000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.632625ms)

** stderr ** 
	error: context "no-preload-918000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-918000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (29.261208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-918000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (30.00475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
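
The image verification diffs the expected image set for v1.31.0-beta.0 against the output of `minikube image list --format=json`; with the VM down the got side is empty, so every entry lands on the -want side. The test itself prints a go-cmp diff; a simplified stand-in for the comparison (expected list abbreviated from the failure above):

	package main

	import "fmt"

	// missing returns the entries of want that are absent from got,
	// mirroring the -want/+got output printed by the test.
	func missing(want, got []string) []string {
		have := make(map[string]bool, len(got))
		for _, g := range got {
			have[g] = true
		}
		var out []string
		for _, w := range want {
			if !have[w] {
				out = append(out, w)
			}
		}
		return out
	}

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
			"registry.k8s.io/pause:3.10",
			// remaining v1.31.0-beta.0 images omitted for brevity
		}
		fmt.Println(missing(want, nil)) // nothing was listed, so all of want is missing
	}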

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-918000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-918000 --alsologtostderr -v=1: exit status 83 (41.518292ms)

-- stdout --
	* The control-plane node no-preload-918000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-918000"

-- /stdout --
** stderr ** 
	I0723 07:46:57.689855    6750 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:46:57.689997    6750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:57.690000    6750 out.go:304] Setting ErrFile to fd 2...
	I0723 07:46:57.690002    6750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:57.690135    6750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:46:57.690367    6750 out.go:298] Setting JSON to false
	I0723 07:46:57.690373    6750 mustload.go:65] Loading cluster: no-preload-918000
	I0723 07:46:57.690546    6750 config.go:182] Loaded profile config "no-preload-918000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0723 07:46:57.694702    6750 out.go:177] * The control-plane node no-preload-918000 host is not running: state=Stopped
	I0723 07:46:57.697570    6750 out.go:177]   To start a cluster, run: "minikube start -p no-preload-918000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-918000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (29.700791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (29.835709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
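
Note the exit status here is 83 rather than 80: per the stderr trace, pause loads the profile, sees the host is Stopped, prints the advice shown in stdout, and exits without attempting to provision anything. A sketch of how a caller reads that code from the subprocess (binary path copied from the command above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "no-preload-918000")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// 83 in this run: host not running, so pause bails with advice
			fmt.Println("exit status", ee.ExitCode())
		}
	}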

TestStartStop/group/embed-certs/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-482000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-482000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.773482125s)

-- stdout --
	* [embed-certs-482000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-482000" primary control-plane node in "embed-certs-482000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-482000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:46:57.997772    6767 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:46:57.997886    6767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:57.997888    6767 out.go:304] Setting ErrFile to fd 2...
	I0723 07:46:57.997890    6767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:46:57.998015    6767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:46:57.999072    6767 out.go:298] Setting JSON to false
	I0723 07:46:58.015252    6767 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4582,"bootTime":1721741436,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:46:58.015325    6767 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:46:58.019671    6767 out.go:177] * [embed-certs-482000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:46:58.027702    6767 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:46:58.027791    6767 notify.go:220] Checking for updates...
	I0723 07:46:58.033647    6767 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:46:58.036694    6767 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:46:58.039586    6767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:46:58.042686    6767 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:46:58.045678    6767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:46:58.047378    6767 config.go:182] Loaded profile config "cert-expiration-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:46:58.047443    6767 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:46:58.047486    6767 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:46:58.051598    6767 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:46:58.058513    6767 start.go:297] selected driver: qemu2
	I0723 07:46:58.058520    6767 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:46:58.058527    6767 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:46:58.060634    6767 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:46:58.063644    6767 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:46:58.066728    6767 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:46:58.066745    6767 cni.go:84] Creating CNI manager for ""
	I0723 07:46:58.066752    6767 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:46:58.066756    6767 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 07:46:58.066781    6767 start.go:340] cluster config:
	{Name:embed-certs-482000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-482000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:46:58.070288    6767 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:46:58.077642    6767 out.go:177] * Starting "embed-certs-482000" primary control-plane node in "embed-certs-482000" cluster
	I0723 07:46:58.081656    6767 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:46:58.081669    6767 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:46:58.081680    6767 cache.go:56] Caching tarball of preloaded images
	I0723 07:46:58.081733    6767 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:46:58.081739    6767 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:46:58.081803    6767 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/embed-certs-482000/config.json ...
	I0723 07:46:58.081814    6767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/embed-certs-482000/config.json: {Name:mk0f2832b68c908e8ada29513eec00f3db2c20ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:46:58.082044    6767 start.go:360] acquireMachinesLock for embed-certs-482000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:46:58.082079    6767 start.go:364] duration metric: took 29.416µs to acquireMachinesLock for "embed-certs-482000"
	I0723 07:46:58.082093    6767 start.go:93] Provisioning new machine with config: &{Name:embed-certs-482000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-482000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:46:58.082134    6767 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:46:58.090633    6767 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:46:58.107808    6767 start.go:159] libmachine.API.Create for "embed-certs-482000" (driver="qemu2")
	I0723 07:46:58.107835    6767 client.go:168] LocalClient.Create starting
	I0723 07:46:58.107893    6767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:46:58.107924    6767 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:58.107934    6767 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:58.107984    6767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:46:58.108008    6767 main.go:141] libmachine: Decoding PEM data...
	I0723 07:46:58.108018    6767 main.go:141] libmachine: Parsing certificate...
	I0723 07:46:58.108375    6767 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:46:58.265034    6767 main.go:141] libmachine: Creating SSH key...
	I0723 07:46:58.324250    6767 main.go:141] libmachine: Creating Disk image...
	I0723 07:46:58.324259    6767 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:46:58.324427    6767 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2
	I0723 07:46:58.333290    6767 main.go:141] libmachine: STDOUT: 
	I0723 07:46:58.333308    6767 main.go:141] libmachine: STDERR: 
	I0723 07:46:58.333355    6767 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2 +20000M
	I0723 07:46:58.341137    6767 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:46:58.341151    6767 main.go:141] libmachine: STDERR: 
	I0723 07:46:58.341161    6767 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2
	I0723 07:46:58.341166    6767 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:46:58.341179    6767 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:46:58.341207    6767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:59:b6:8a:64:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2
	I0723 07:46:58.342808    6767 main.go:141] libmachine: STDOUT: 
	I0723 07:46:58.342824    6767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:46:58.342840    6767 client.go:171] duration metric: took 235.007375ms to LocalClient.Create
	I0723 07:47:00.344979    6767 start.go:128] duration metric: took 2.26286925s to createHost
	I0723 07:47:00.345028    6767 start.go:83] releasing machines lock for "embed-certs-482000", held for 2.262984166s
	W0723 07:47:00.345092    6767 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:00.355132    6767 out.go:177] * Deleting "embed-certs-482000" in qemu2 ...
	W0723 07:47:00.387503    6767 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:00.387528    6767 start.go:729] Will try again in 5 seconds ...
	I0723 07:47:05.389634    6767 start.go:360] acquireMachinesLock for embed-certs-482000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:47:05.390105    6767 start.go:364] duration metric: took 377.791µs to acquireMachinesLock for "embed-certs-482000"
	I0723 07:47:05.390235    6767 start.go:93] Provisioning new machine with config: &{Name:embed-certs-482000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-482000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:47:05.390513    6767 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:47:05.406254    6767 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:47:05.457194    6767 start.go:159] libmachine.API.Create for "embed-certs-482000" (driver="qemu2")
	I0723 07:47:05.457242    6767 client.go:168] LocalClient.Create starting
	I0723 07:47:05.457369    6767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:47:05.457433    6767 main.go:141] libmachine: Decoding PEM data...
	I0723 07:47:05.457449    6767 main.go:141] libmachine: Parsing certificate...
	I0723 07:47:05.457509    6767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:47:05.457553    6767 main.go:141] libmachine: Decoding PEM data...
	I0723 07:47:05.457567    6767 main.go:141] libmachine: Parsing certificate...
	I0723 07:47:05.458174    6767 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:47:05.624976    6767 main.go:141] libmachine: Creating SSH key...
	I0723 07:47:05.677562    6767 main.go:141] libmachine: Creating Disk image...
	I0723 07:47:05.677567    6767 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:47:05.677732    6767 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2
	I0723 07:47:05.686780    6767 main.go:141] libmachine: STDOUT: 
	I0723 07:47:05.686797    6767 main.go:141] libmachine: STDERR: 
	I0723 07:47:05.686849    6767 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2 +20000M
	I0723 07:47:05.694551    6767 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:47:05.694564    6767 main.go:141] libmachine: STDERR: 
	I0723 07:47:05.694574    6767 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2
	I0723 07:47:05.694579    6767 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:47:05.694593    6767 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:47:05.694623    6767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e4:46:3f:83:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2
	I0723 07:47:05.696154    6767 main.go:141] libmachine: STDOUT: 
	I0723 07:47:05.696167    6767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:47:05.696180    6767 client.go:171] duration metric: took 238.936584ms to LocalClient.Create
	I0723 07:47:07.698411    6767 start.go:128] duration metric: took 2.307903666s to createHost
	I0723 07:47:07.698551    6767 start.go:83] releasing machines lock for "embed-certs-482000", held for 2.308402166s
	W0723 07:47:07.698926    6767 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-482000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-482000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:07.712569    6767 out.go:177] 
	W0723 07:47:07.716892    6767 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:47:07.716946    6767 out.go:239] * 
	* 
	W0723 07:47:07.719537    6767 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:47:07.729567    6767 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-482000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000: exit status 7 (65.48675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-482000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.84s)
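
The first-start log shows the disk image preparation succeeding before the launch fails: `qemu-img convert` builds the qcow2 from the raw image and `qemu-img resize ... +20000M` grows it, and only the socket_vmnet-wrapped QEMU start errors out. A sketch of those two steps under a hypothetical directory (flags copied from the executed commands above):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		dir := "/tmp/demo-machine" // hypothetical; the log uses the profile's machines dir
		raw, qcow := dir+"/disk.qcow2.raw", dir+"/disk.qcow2"
		// Convert the raw boot disk to qcow2, as in the logged convert step.
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow).CombinedOutput(); err != nil {
			log.Fatalf("convert: %v\n%s", err, out)
		}
		// Grow the image by 20000 MB, as in the logged resize step.
		if out, err := exec.Command("qemu-img", "resize", qcow, "+20000M").CombinedOutput(); err != nil {
			log.Fatalf("resize: %v\n%s", err, out)
		}
		log.Println("disk image ready:", qcow)
	}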

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-482000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-482000 create -f testdata/busybox.yaml: exit status 1 (29.958583ms)

** stderr ** 
	error: context "embed-certs-482000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-482000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000: exit status 7 (29.414792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-482000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000: exit status 7 (28.772083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-482000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-482000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-482000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-482000 describe deploy/metrics-server -n kube-system: exit status 1 (26.422167ms)

** stderr ** 
	error: context "embed-certs-482000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-482000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000: exit status 7 (29.6105ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-482000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
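
The `addons enable` command itself appears to exit cleanly in this run (the overrides are recorded in the profile config, as the SecondStart log below shows), which is why the failure only surfaces at the kubectl describe. The expectation string shows how the --registries override is prefixed onto the --images override; a trivial sketch of that composition (helper name hypothetical):

	package main

	import "fmt"

	// withRegistry prefixes an image reference with a custom registry,
	// matching the "fake.domain/registry.k8s.io/echoserver:1.4" expectation above.
	func withRegistry(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	func main() {
		fmt.Println(withRegistry("fake.domain", "registry.k8s.io/echoserver:1.4"))
	}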

TestStartStop/group/embed-certs/serial/SecondStart (5.2s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-482000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-482000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.17222475s)

-- stdout --
	* [embed-certs-482000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-482000" primary control-plane node in "embed-certs-482000" cluster
	* Restarting existing qemu2 VM for "embed-certs-482000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-482000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:47:11.666564    6826 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:47:11.666683    6826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:11.666686    6826 out.go:304] Setting ErrFile to fd 2...
	I0723 07:47:11.666689    6826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:11.666810    6826 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:47:11.667802    6826 out.go:298] Setting JSON to false
	I0723 07:47:11.683641    6826 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4595,"bootTime":1721741436,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:47:11.683716    6826 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:47:11.688643    6826 out.go:177] * [embed-certs-482000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:47:11.694588    6826 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:47:11.694648    6826 notify.go:220] Checking for updates...
	I0723 07:47:11.701594    6826 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:47:11.704580    6826 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:47:11.707605    6826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:47:11.708957    6826 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:47:11.711544    6826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:47:11.714814    6826 config.go:182] Loaded profile config "embed-certs-482000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:47:11.715092    6826 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:47:11.719345    6826 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:47:11.726624    6826 start.go:297] selected driver: qemu2
	I0723 07:47:11.726634    6826 start.go:901] validating driver "qemu2" against &{Name:embed-certs-482000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-482000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:47:11.726699    6826 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:47:11.728861    6826 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:47:11.728885    6826 cni.go:84] Creating CNI manager for ""
	I0723 07:47:11.728892    6826 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:47:11.728919    6826 start.go:340] cluster config:
	{Name:embed-certs-482000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-482000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:47:11.732300    6826 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:47:11.739512    6826 out.go:177] * Starting "embed-certs-482000" primary control-plane node in "embed-certs-482000" cluster
	I0723 07:47:11.743557    6826 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:47:11.743570    6826 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:47:11.743576    6826 cache.go:56] Caching tarball of preloaded images
	I0723 07:47:11.743626    6826 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:47:11.743631    6826 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:47:11.743676    6826 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/embed-certs-482000/config.json ...
	I0723 07:47:11.744070    6826 start.go:360] acquireMachinesLock for embed-certs-482000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:47:11.744098    6826 start.go:364] duration metric: took 21.375µs to acquireMachinesLock for "embed-certs-482000"
	I0723 07:47:11.744107    6826 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:47:11.744112    6826 fix.go:54] fixHost starting: 
	I0723 07:47:11.744223    6826 fix.go:112] recreateIfNeeded on embed-certs-482000: state=Stopped err=<nil>
	W0723 07:47:11.744231    6826 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:47:11.752559    6826 out.go:177] * Restarting existing qemu2 VM for "embed-certs-482000" ...
	I0723 07:47:11.756465    6826 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:47:11.756504    6826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e4:46:3f:83:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2
	I0723 07:47:11.758453    6826 main.go:141] libmachine: STDOUT: 
	I0723 07:47:11.758470    6826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:47:11.758499    6826 fix.go:56] duration metric: took 14.386084ms for fixHost
	I0723 07:47:11.758502    6826 start.go:83] releasing machines lock for "embed-certs-482000", held for 14.400875ms
	W0723 07:47:11.758509    6826 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:47:11.758534    6826 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:11.758539    6826 start.go:729] Will try again in 5 seconds ...
	I0723 07:47:16.760535    6826 start.go:360] acquireMachinesLock for embed-certs-482000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:47:16.760603    6826 start.go:364] duration metric: took 52.209µs to acquireMachinesLock for "embed-certs-482000"
	I0723 07:47:16.760612    6826 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:47:16.760616    6826 fix.go:54] fixHost starting: 
	I0723 07:47:16.760759    6826 fix.go:112] recreateIfNeeded on embed-certs-482000: state=Stopped err=<nil>
	W0723 07:47:16.760764    6826 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:47:16.768524    6826 out.go:177] * Restarting existing qemu2 VM for "embed-certs-482000" ...
	I0723 07:47:16.775551    6826 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:47:16.775599    6826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e4:46:3f:83:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/embed-certs-482000/disk.qcow2
	I0723 07:47:16.777703    6826 main.go:141] libmachine: STDOUT: 
	I0723 07:47:16.777725    6826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:47:16.777741    6826 fix.go:56] duration metric: took 17.126042ms for fixHost
	I0723 07:47:16.777744    6826 start.go:83] releasing machines lock for "embed-certs-482000", held for 17.137666ms
	W0723 07:47:16.777791    6826 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-482000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-482000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:16.789501    6826 out.go:177] 
	W0723 07:47:16.793574    6826 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:47:16.793580    6826 out.go:239] * 
	* 
	W0723 07:47:16.794057    6826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:47:16.803406    6826 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-482000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000: exit status 7 (28.880625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-482000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.20s)
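Every failure in this group has the same root cause: the qemu2 driver hands the VM's network file descriptor over the unix socket at /var/run/socket_vmnet, and no socket_vmnet daemon is listening on this runner, so QEMU is never launched. A minimal Go sketch (a hypothetical probe, not part of the suite) that reproduces the check against the SocketVMnetPath shown in the config dump above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same socket path the driver passes to socket_vmnet_client in the log above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
	if err != nil {
		// This is the runner's state: nothing is listening on the socket.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails like this, the fix is on the host rather than in minikube: the socket_vmnet service has to be (re)started (the minikube qemu2 driver docs describe running it as a Homebrew service) before any of these profiles can boot.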

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-482000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000: exit status 7 (30.018417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-482000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
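This wait step fails before any API traffic: building a client config requires a kubeconfig context named after the profile, and because SecondStart never completed, minikube never wrote one. A sketch of that lookup (assuming the k8s.io/client-go module is available), mirroring how kubectl resolves contexts:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the standard locations ($KUBECONFIG, ~/.kube/config).
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Println("loading kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["embed-certs-482000"]; !ok {
		// Matches the failure above: the start never got far enough to write one.
		fmt.Println(`context "embed-certs-482000" does not exist`)
	}
}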

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-482000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-482000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-482000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.76625ms)

** stderr ** 
	error: context "embed-certs-482000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-482000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000: exit status 7 (31.375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-482000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.87s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-374000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-374000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.836134875s)

-- stdout --
	* [default-k8s-diff-port-374000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-374000" primary control-plane node in "default-k8s-diff-port-374000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-374000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:47:16.923329    6855 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:47:16.923492    6855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:16.923496    6855 out.go:304] Setting ErrFile to fd 2...
	I0723 07:47:16.923498    6855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:16.923637    6855 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:47:16.925142    6855 out.go:298] Setting JSON to false
	I0723 07:47:16.943038    6855 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4600,"bootTime":1721741436,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:47:16.943105    6855 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:47:16.947503    6855 out.go:177] * [default-k8s-diff-port-374000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:47:16.952603    6855 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:47:16.952602    6855 notify.go:220] Checking for updates...
	I0723 07:47:16.959523    6855 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:47:16.969637    6855 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:47:16.976524    6855 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:47:16.984555    6855 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:47:16.992415    6855 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:47:16.996910    6855 config.go:182] Loaded profile config "embed-certs-482000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:47:16.996972    6855 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:47:16.997020    6855 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:47:17.000578    6855 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:47:17.008504    6855 start.go:297] selected driver: qemu2
	I0723 07:47:17.008509    6855 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:47:17.008515    6855 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:47:17.010834    6855 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 07:47:17.015513    6855 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:47:17.019639    6855 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:47:17.019675    6855 cni.go:84] Creating CNI manager for ""
	I0723 07:47:17.019683    6855 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:47:17.019687    6855 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 07:47:17.019718    6855 start.go:340] cluster config:
	{Name:default-k8s-diff-port-374000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-374000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:47:17.023411    6855 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:47:17.031808    6855 out.go:177] * Starting "default-k8s-diff-port-374000" primary control-plane node in "default-k8s-diff-port-374000" cluster
	I0723 07:47:17.035427    6855 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:47:17.035450    6855 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:47:17.035459    6855 cache.go:56] Caching tarball of preloaded images
	I0723 07:47:17.035544    6855 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:47:17.035550    6855 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:47:17.035619    6855 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/default-k8s-diff-port-374000/config.json ...
	I0723 07:47:17.035631    6855 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/default-k8s-diff-port-374000/config.json: {Name:mk93468c4161535648b127b57a9e5e6d9de10e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:47:17.035937    6855 start.go:360] acquireMachinesLock for default-k8s-diff-port-374000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:47:17.035976    6855 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "default-k8s-diff-port-374000"
	I0723 07:47:17.035989    6855 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-374000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-374000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:47:17.036028    6855 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:47:17.042518    6855 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:47:17.058556    6855 start.go:159] libmachine.API.Create for "default-k8s-diff-port-374000" (driver="qemu2")
	I0723 07:47:17.058591    6855 client.go:168] LocalClient.Create starting
	I0723 07:47:17.058659    6855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:47:17.058688    6855 main.go:141] libmachine: Decoding PEM data...
	I0723 07:47:17.058700    6855 main.go:141] libmachine: Parsing certificate...
	I0723 07:47:17.058736    6855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:47:17.058759    6855 main.go:141] libmachine: Decoding PEM data...
	I0723 07:47:17.058765    6855 main.go:141] libmachine: Parsing certificate...
	I0723 07:47:17.059127    6855 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:47:17.263259    6855 main.go:141] libmachine: Creating SSH key...
	I0723 07:47:17.314388    6855 main.go:141] libmachine: Creating Disk image...
	I0723 07:47:17.314397    6855 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:47:17.314565    6855 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2
	I0723 07:47:17.324598    6855 main.go:141] libmachine: STDOUT: 
	I0723 07:47:17.324628    6855 main.go:141] libmachine: STDERR: 
	I0723 07:47:17.324680    6855 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2 +20000M
	I0723 07:47:17.333216    6855 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:47:17.333248    6855 main.go:141] libmachine: STDERR: 
	I0723 07:47:17.333275    6855 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2
	I0723 07:47:17.333280    6855 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:47:17.333297    6855 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:47:17.333324    6855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:59:e5:49:ab:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2
	I0723 07:47:17.335257    6855 main.go:141] libmachine: STDOUT: 
	I0723 07:47:17.335292    6855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:47:17.335311    6855 client.go:171] duration metric: took 276.721292ms to LocalClient.Create
	I0723 07:47:19.337519    6855 start.go:128] duration metric: took 2.301468542s to createHost
	I0723 07:47:19.337629    6855 start.go:83] releasing machines lock for "default-k8s-diff-port-374000", held for 2.301686208s
	W0723 07:47:19.337670    6855 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:19.355702    6855 out.go:177] * Deleting "default-k8s-diff-port-374000" in qemu2 ...
	W0723 07:47:19.377970    6855 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:19.377991    6855 start.go:729] Will try again in 5 seconds ...
	I0723 07:47:24.379533    6855 start.go:360] acquireMachinesLock for default-k8s-diff-port-374000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:47:24.379979    6855 start.go:364] duration metric: took 328.459µs to acquireMachinesLock for "default-k8s-diff-port-374000"
	I0723 07:47:24.380121    6855 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-374000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-374000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:47:24.380425    6855 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:47:24.392333    6855 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:47:24.442323    6855 start.go:159] libmachine.API.Create for "default-k8s-diff-port-374000" (driver="qemu2")
	I0723 07:47:24.442371    6855 client.go:168] LocalClient.Create starting
	I0723 07:47:24.442481    6855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:47:24.442546    6855 main.go:141] libmachine: Decoding PEM data...
	I0723 07:47:24.442561    6855 main.go:141] libmachine: Parsing certificate...
	I0723 07:47:24.442634    6855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:47:24.442678    6855 main.go:141] libmachine: Decoding PEM data...
	I0723 07:47:24.442689    6855 main.go:141] libmachine: Parsing certificate...
	I0723 07:47:24.443191    6855 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:47:24.605834    6855 main.go:141] libmachine: Creating SSH key...
	I0723 07:47:24.665061    6855 main.go:141] libmachine: Creating Disk image...
	I0723 07:47:24.665069    6855 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:47:24.665227    6855 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2
	I0723 07:47:24.674360    6855 main.go:141] libmachine: STDOUT: 
	I0723 07:47:24.674382    6855 main.go:141] libmachine: STDERR: 
	I0723 07:47:24.674445    6855 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2 +20000M
	I0723 07:47:24.682228    6855 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:47:24.682243    6855 main.go:141] libmachine: STDERR: 
	I0723 07:47:24.682262    6855 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2
	I0723 07:47:24.682267    6855 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:47:24.682279    6855 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:47:24.682303    6855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:28:fe:e4:53:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2
	I0723 07:47:24.683925    6855 main.go:141] libmachine: STDOUT: 
	I0723 07:47:24.683942    6855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:47:24.683954    6855 client.go:171] duration metric: took 241.581542ms to LocalClient.Create
	I0723 07:47:26.686011    6855 start.go:128] duration metric: took 2.305612959s to createHost
	I0723 07:47:26.686048    6855 start.go:83] releasing machines lock for "default-k8s-diff-port-374000", held for 2.306091125s
	W0723 07:47:26.686209    6855 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-374000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-374000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:26.702561    6855 out.go:177] 
	W0723 07:47:26.706650    6855 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:47:26.706663    6855 out.go:239] * 
	* 
	W0723 07:47:26.707774    6855 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:47:26.719574    6855 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-374000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000: exit status 7 (37.623833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.87s)
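Note that machine creation itself succeeds up to the network handoff: libmachine converts the raw boot image to qcow2 and then grows it by 20000M, exactly the two qemu-img invocations logged above. A self-contained sketch of those steps, with illustrative paths rather than the runner's real ones:

package main

import (
	"log"
	"os/exec"
)

func main() {
	raw := "disk.qcow2.raw" // hypothetical input path
	img := "disk.qcow2"     // hypothetical output path
	for _, args := range [][]string{
		// Convert the raw image to qcow2, then resize it, as in the log above.
		{"convert", "-f", "raw", "-O", "qcow2", raw, img},
		{"resize", img, "+20000M"},
	} {
		if out, err := exec.Command("qemu-img", args...).CombinedOutput(); err != nil {
			log.Fatalf("qemu-img %v: %v\n%s", args, err, out)
		}
	}
}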

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.11s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-482000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000: exit status 7 (30.594416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-482000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.11s)
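The -want +got diff above is entirely one-sided because `image list` on a stopped profile returns nothing, so the assertion degenerates to a set difference against the expected v1.30.3 image list. A sketch with the list abbreviated:

package main

import "fmt"

func main() {
	// Expected images for v1.30.3 (abbreviated; the full list is in the diff above).
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/pause:3.9",
	}
	got := map[string]bool{} // parsed `image list` output; empty for a stopped host
	for _, img := range want {
		if !got[img] {
			fmt.Println("missing:", img) // every entry lands here, hence the one-sided diff
		}
	}
}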

TestStartStop/group/embed-certs/serial/Pause (0.11s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-482000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-482000 --alsologtostderr -v=1: exit status 83 (50.290583ms)

-- stdout --
	* The control-plane node embed-certs-482000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-482000"

-- /stdout --
** stderr ** 
	I0723 07:47:17.073004    6868 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:47:17.073171    6868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:17.073177    6868 out.go:304] Setting ErrFile to fd 2...
	I0723 07:47:17.073179    6868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:17.073325    6868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:47:17.073551    6868 out.go:298] Setting JSON to false
	I0723 07:47:17.073560    6868 mustload.go:65] Loading cluster: embed-certs-482000
	I0723 07:47:17.073791    6868 config.go:182] Loaded profile config "embed-certs-482000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:47:17.081595    6868 out.go:177] * The control-plane node embed-certs-482000 host is not running: state=Stopped
	I0723 07:47:17.088531    6868 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-482000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-482000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000: exit status 7 (32.032292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-482000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000: exit status 7 (32.231709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-482000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
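The helpers rely on minikube's distinct exit codes: 80 (GUEST_PROVISION) from the failed starts, 83 from pausing a profile whose host is stopped, and 7 from `status` against a stopped host, which the post-mortem explicitly treats as "may be ok". A sketch of reading that code from Go, assuming the same binary path used throughout this report:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "embed-certs-482000")
	out, err := cmd.Output()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode() // 7 here: the profile exists but its host is stopped
	} else if err != nil {
		fmt.Println("could not run minikube:", err) // e.g. binary not found
		return
	}
	fmt.Printf("host=%q exit=%d\n", out, code)
}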

TestStartStop/group/newest-cni/serial/FirstStart (11.83s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-498000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-498000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (11.765036208s)

-- stdout --
	* [newest-cni-498000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-498000" primary control-plane node in "newest-cni-498000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-498000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:47:17.429211    6888 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:47:17.429419    6888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:17.429422    6888 out.go:304] Setting ErrFile to fd 2...
	I0723 07:47:17.429424    6888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:17.429563    6888 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:47:17.430608    6888 out.go:298] Setting JSON to false
	I0723 07:47:17.446429    6888 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4601,"bootTime":1721741436,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:47:17.446495    6888 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:47:17.451598    6888 out.go:177] * [newest-cni-498000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:47:17.458580    6888 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:47:17.458619    6888 notify.go:220] Checking for updates...
	I0723 07:47:17.464443    6888 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:47:17.467503    6888 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:47:17.470581    6888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:47:17.473504    6888 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:47:17.476551    6888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:47:17.479791    6888 config.go:182] Loaded profile config "default-k8s-diff-port-374000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:47:17.479852    6888 config.go:182] Loaded profile config "multinode-887000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:47:17.479900    6888 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:47:17.483493    6888 out.go:177] * Using the qemu2 driver based on user configuration
	I0723 07:47:17.490596    6888 start.go:297] selected driver: qemu2
	I0723 07:47:17.490604    6888 start.go:901] validating driver "qemu2" against <nil>
	I0723 07:47:17.490611    6888 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:47:17.492731    6888 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0723 07:47:17.492755    6888 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0723 07:47:17.499456    6888 out.go:177] * Automatically selected the socket_vmnet network
	I0723 07:47:17.502619    6888 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0723 07:47:17.502635    6888 cni.go:84] Creating CNI manager for ""
	I0723 07:47:17.502644    6888 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:47:17.502653    6888 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 07:47:17.502680    6888 start.go:340] cluster config:
	{Name:newest-cni-498000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:47:17.506160    6888 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:47:17.513529    6888 out.go:177] * Starting "newest-cni-498000" primary control-plane node in "newest-cni-498000" cluster
	I0723 07:47:17.517481    6888 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0723 07:47:17.517496    6888 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0723 07:47:17.517503    6888 cache.go:56] Caching tarball of preloaded images
	I0723 07:47:17.517574    6888 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:47:17.517581    6888 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0723 07:47:17.517655    6888 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/newest-cni-498000/config.json ...
	I0723 07:47:17.517669    6888 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/newest-cni-498000/config.json: {Name:mk4b5a84dd351c9366de6590d9a605fccdb90ef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 07:47:17.517899    6888 start.go:360] acquireMachinesLock for newest-cni-498000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:47:19.337745    6888 start.go:364] duration metric: took 1.819862292s to acquireMachinesLock for "newest-cni-498000"
	I0723 07:47:19.337892    6888 start.go:93] Provisioning new machine with config: &{Name:newest-cni-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:47:19.338133    6888 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:47:19.347636    6888 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:47:19.395659    6888 start.go:159] libmachine.API.Create for "newest-cni-498000" (driver="qemu2")
	I0723 07:47:19.395706    6888 client.go:168] LocalClient.Create starting
	I0723 07:47:19.395835    6888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:47:19.395894    6888 main.go:141] libmachine: Decoding PEM data...
	I0723 07:47:19.395911    6888 main.go:141] libmachine: Parsing certificate...
	I0723 07:47:19.395985    6888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:47:19.396040    6888 main.go:141] libmachine: Decoding PEM data...
	I0723 07:47:19.396054    6888 main.go:141] libmachine: Parsing certificate...
	I0723 07:47:19.396664    6888 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:47:19.568384    6888 main.go:141] libmachine: Creating SSH key...
	I0723 07:47:19.596809    6888 main.go:141] libmachine: Creating Disk image...
	I0723 07:47:19.596815    6888 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:47:19.596976    6888 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2
	I0723 07:47:19.606159    6888 main.go:141] libmachine: STDOUT: 
	I0723 07:47:19.606181    6888 main.go:141] libmachine: STDERR: 
	I0723 07:47:19.606229    6888 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2 +20000M
	I0723 07:47:19.614151    6888 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:47:19.614165    6888 main.go:141] libmachine: STDERR: 
	I0723 07:47:19.614179    6888 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2
	I0723 07:47:19.614182    6888 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:47:19.614194    6888 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:47:19.614221    6888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:dc:97:90:fe:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2
	I0723 07:47:19.615791    6888 main.go:141] libmachine: STDOUT: 
	I0723 07:47:19.615806    6888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:47:19.615829    6888 client.go:171] duration metric: took 220.121666ms to LocalClient.Create
	I0723 07:47:21.617960    6888 start.go:128] duration metric: took 2.279820833s to createHost
	I0723 07:47:21.618022    6888 start.go:83] releasing machines lock for "newest-cni-498000", held for 2.280289584s
	W0723 07:47:21.618128    6888 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:21.636291    6888 out.go:177] * Deleting "newest-cni-498000" in qemu2 ...
	W0723 07:47:21.671647    6888 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:21.671672    6888 start.go:729] Will try again in 5 seconds ...
	I0723 07:47:26.673730    6888 start.go:360] acquireMachinesLock for newest-cni-498000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:47:26.686110    6888 start.go:364] duration metric: took 12.328333ms to acquireMachinesLock for "newest-cni-498000"
	I0723 07:47:26.686183    6888 start.go:93] Provisioning new machine with config: &{Name:newest-cni-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0723 07:47:26.686325    6888 start.go:125] createHost starting for "" (driver="qemu2")
	I0723 07:47:26.695669    6888 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 07:47:26.729122    6888 start.go:159] libmachine.API.Create for "newest-cni-498000" (driver="qemu2")
	I0723 07:47:26.729161    6888 client.go:168] LocalClient.Create starting
	I0723 07:47:26.729252    6888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/ca.pem
	I0723 07:47:26.729296    6888 main.go:141] libmachine: Decoding PEM data...
	I0723 07:47:26.729313    6888 main.go:141] libmachine: Parsing certificate...
	I0723 07:47:26.729380    6888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19319-1567/.minikube/certs/cert.pem
	I0723 07:47:26.729415    6888 main.go:141] libmachine: Decoding PEM data...
	I0723 07:47:26.729426    6888 main.go:141] libmachine: Parsing certificate...
	I0723 07:47:26.729856    6888 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0723 07:47:26.893091    6888 main.go:141] libmachine: Creating SSH key...
	I0723 07:47:27.110559    6888 main.go:141] libmachine: Creating Disk image...
	I0723 07:47:27.110566    6888 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0723 07:47:27.110755    6888 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2.raw /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2
	I0723 07:47:27.119674    6888 main.go:141] libmachine: STDOUT: 
	I0723 07:47:27.119691    6888 main.go:141] libmachine: STDERR: 
	I0723 07:47:27.119737    6888 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2 +20000M
	I0723 07:47:27.127499    6888 main.go:141] libmachine: STDOUT: Image resized.
	
	I0723 07:47:27.127513    6888 main.go:141] libmachine: STDERR: 
	I0723 07:47:27.127525    6888 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2
	I0723 07:47:27.127530    6888 main.go:141] libmachine: Starting QEMU VM...
	I0723 07:47:27.127539    6888 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:47:27.127575    6888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:66:f9:69:df:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2
	I0723 07:47:27.129129    6888 main.go:141] libmachine: STDOUT: 
	I0723 07:47:27.129144    6888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:47:27.129158    6888 client.go:171] duration metric: took 399.997542ms to LocalClient.Create
	I0723 07:47:29.131370    6888 start.go:128] duration metric: took 2.445022042s to createHost
	I0723 07:47:29.131421    6888 start.go:83] releasing machines lock for "newest-cni-498000", held for 2.445341834s
	W0723 07:47:29.131868    6888 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-498000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-498000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:29.136887    6888 out.go:177] 
	W0723 07:47:29.143639    6888 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:47:29.143678    6888 out.go:239] * 
	* 
	W0723 07:47:29.146531    6888 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:47:29.154542    6888 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-498000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-498000 -n newest-cni-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-498000 -n newest-cni-498000: exit status 7 (65.94925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.83s)
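
Reviewer note: every start attempt above fails at the same step. QEMU is launched through socket_vmnet_client, and the client cannot reach the daemon's Unix socket ("Failed to connect to /var/run/socket_vmnet: Connection refused"), so the VM never receives its network fd and createHost gives up after one 5-second retry. A minimal host-side triage sketch follows; the paths are taken from the log above, but the launchd query and the --vmnet-gateway address are assumptions about this agent, not facts from the report.

	ls -l /var/run/socket_vmnet                    # does the daemon's Unix socket exist?
	sudo launchctl list | grep -i socket_vmnet     # is any daemon registered for it? (label assumed)
	# Per the socket_vmnet README, the daemon is typically started as root, e.g.
	# (gateway address assumed):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	# Re-test with the same client binary minikube invokes in the log:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo "socket reachable"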

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-374000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-374000 create -f testdata/busybox.yaml: exit status 1 (30.385167ms)

** stderr ** 
	error: context "default-k8s-diff-port-374000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-374000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000: exit status 7 (32.897ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-374000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000: exit status 7 (31.952167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
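
Reviewer note: the kubectl failure here (context "default-k8s-diff-port-374000" does not exist) is downstream of the earlier start failure rather than a separate bug: minikube writes the kubeconfig context only once the host boots, so every kubectl step in this group inherits the same root cause. Two standard commands (profile name taken from the log) confirm the cascade:

	kubectl config get-contexts              # the profile's context was never written
	out/minikube-darwin-arm64 profile list   # the profile exists on disk, but its host is Stopped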

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-374000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-374000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-374000 describe deploy/metrics-server -n kube-system: exit status 1 (27.88225ms)

** stderr ** 
	error: context "default-k8s-diff-port-374000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-374000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000: exit status 7 (29.610125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)
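
Reviewer note: within this block the "addons enable metrics-server" invocation itself reports no non-zero exit; only the kubectl verification fails, since with no running apiserver there is no deployment to describe. Whether the custom image and registry overrides were at least persisted can be checked against the profile's config.json (the path mirrors the profile save seen earlier in this report; the grep pattern is only an illustrative shortcut):

	grep -o 'MetricsServer[^,}]*' /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/default-k8s-diff-port-374000/config.json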

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-374000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-374000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.18385625s)

-- stdout --
	* [default-k8s-diff-port-374000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-374000" primary control-plane node in "default-k8s-diff-port-374000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-374000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-374000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:47:30.262174    6953 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:47:30.262301    6953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:30.262304    6953 out.go:304] Setting ErrFile to fd 2...
	I0723 07:47:30.262307    6953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:30.262419    6953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:47:30.263426    6953 out.go:298] Setting JSON to false
	I0723 07:47:30.279282    6953 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4614,"bootTime":1721741436,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:47:30.279355    6953 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:47:30.284331    6953 out.go:177] * [default-k8s-diff-port-374000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:47:30.290348    6953 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:47:30.290386    6953 notify.go:220] Checking for updates...
	I0723 07:47:30.297337    6953 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:47:30.300385    6953 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:47:30.303333    6953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:47:30.306353    6953 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:47:30.307577    6953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:47:30.310567    6953 config.go:182] Loaded profile config "default-k8s-diff-port-374000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:47:30.310849    6953 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:47:30.314294    6953 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:47:30.319341    6953 start.go:297] selected driver: qemu2
	I0723 07:47:30.319351    6953 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-374000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-374000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:47:30.319423    6953 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:47:30.321792    6953 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 07:47:30.321834    6953 cni.go:84] Creating CNI manager for ""
	I0723 07:47:30.321842    6953 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:47:30.321882    6953 start.go:340] cluster config:
	{Name:default-k8s-diff-port-374000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-374000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:47:30.325342    6953 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:47:30.333327    6953 out.go:177] * Starting "default-k8s-diff-port-374000" primary control-plane node in "default-k8s-diff-port-374000" cluster
	I0723 07:47:30.337318    6953 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 07:47:30.337334    6953 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 07:47:30.337344    6953 cache.go:56] Caching tarball of preloaded images
	I0723 07:47:30.337399    6953 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:47:30.337405    6953 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0723 07:47:30.337470    6953 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/default-k8s-diff-port-374000/config.json ...
	I0723 07:47:30.337873    6953 start.go:360] acquireMachinesLock for default-k8s-diff-port-374000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:47:30.337898    6953 start.go:364] duration metric: took 19.5µs to acquireMachinesLock for "default-k8s-diff-port-374000"
	I0723 07:47:30.337907    6953 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:47:30.337914    6953 fix.go:54] fixHost starting: 
	I0723 07:47:30.338016    6953 fix.go:112] recreateIfNeeded on default-k8s-diff-port-374000: state=Stopped err=<nil>
	W0723 07:47:30.338024    6953 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:47:30.342320    6953 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-374000" ...
	I0723 07:47:30.350289    6953 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:47:30.350329    6953 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:28:fe:e4:53:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2
	I0723 07:47:30.352191    6953 main.go:141] libmachine: STDOUT: 
	I0723 07:47:30.352208    6953 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:47:30.352234    6953 fix.go:56] duration metric: took 14.320875ms for fixHost
	I0723 07:47:30.352240    6953 start.go:83] releasing machines lock for "default-k8s-diff-port-374000", held for 14.337541ms
	W0723 07:47:30.352249    6953 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:47:30.352282    6953 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:30.352286    6953 start.go:729] Will try again in 5 seconds ...
	I0723 07:47:35.354394    6953 start.go:360] acquireMachinesLock for default-k8s-diff-port-374000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:47:35.354809    6953 start.go:364] duration metric: took 300.208µs to acquireMachinesLock for "default-k8s-diff-port-374000"
	I0723 07:47:35.354939    6953 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:47:35.354961    6953 fix.go:54] fixHost starting: 
	I0723 07:47:35.355699    6953 fix.go:112] recreateIfNeeded on default-k8s-diff-port-374000: state=Stopped err=<nil>
	W0723 07:47:35.355723    6953 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:47:35.365229    6953 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-374000" ...
	I0723 07:47:35.369269    6953 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:47:35.369477    6953 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:28:fe:e4:53:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2
	I0723 07:47:35.378402    6953 main.go:141] libmachine: STDOUT: 
	I0723 07:47:35.378461    6953 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:47:35.378567    6953 fix.go:56] duration metric: took 23.58425ms for fixHost
	I0723 07:47:35.378586    6953 start.go:83] releasing machines lock for "default-k8s-diff-port-374000", held for 23.75725ms
	W0723 07:47:35.378792    6953 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-374000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-374000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:35.390351    6953 out.go:177] 
	W0723 07:47:35.394386    6953 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:47:35.394429    6953 out.go:239] * 
	* 
	W0723 07:47:35.396818    6953 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:47:35.405259    6953 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-374000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000: exit status 7 (63.433792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)
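
Reviewer note: unlike FirstStart, this run takes the fixHost path: state=Stopped is detected and the existing disk.qcow2 is relaunched (no qemu-img convert/resize this time), but both the immediate attempt and the 5-second retry die on the identical socket_vmnet connect, which is why the test fails in about 5.2s with exit status 80. One way to isolate the network layer, sketched below, is to boot the same image with QEMU's stock user-mode NIC instead of socket_vmnet (-nic user is standard qemu-system-aarch64 usage, not what minikube configures):

	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -nic user,model=virtio-net-pci \
	  /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/default-k8s-diff-port-374000/disk.qcow2
	# If the guest boots this way, QEMU/hvf and the disk image are fine and the fault
	# is confined to the /var/run/socket_vmnet daemon.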

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-498000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-498000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.180344875s)

-- stdout --
	* [newest-cni-498000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-498000" primary control-plane node in "newest-cni-498000" cluster
	* Restarting existing qemu2 VM for "newest-cni-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0723 07:47:32.822596    6974 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:47:32.822738    6974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:32.822742    6974 out.go:304] Setting ErrFile to fd 2...
	I0723 07:47:32.822744    6974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:32.822879    6974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:47:32.823843    6974 out.go:298] Setting JSON to false
	I0723 07:47:32.839747    6974 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4616,"bootTime":1721741436,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:47:32.839827    6974 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:47:32.844934    6974 out.go:177] * [newest-cni-498000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:47:32.851976    6974 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:47:32.852017    6974 notify.go:220] Checking for updates...
	I0723 07:47:32.858974    6974 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:47:32.861935    6974 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:47:32.864946    6974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:47:32.867958    6974 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:47:32.870851    6974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:47:32.874181    6974 config.go:182] Loaded profile config "newest-cni-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0723 07:47:32.874464    6974 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:47:32.878901    6974 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:47:32.885930    6974 start.go:297] selected driver: qemu2
	I0723 07:47:32.885937    6974 start.go:901] validating driver "qemu2" against &{Name:newest-cni-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:47:32.886004    6974 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:47:32.888365    6974 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0723 07:47:32.888389    6974 cni.go:84] Creating CNI manager for ""
	I0723 07:47:32.888396    6974 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 07:47:32.888422    6974 start.go:340] cluster config:
	{Name:newest-cni-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:47:32.891966    6974 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 07:47:32.897881    6974 out.go:177] * Starting "newest-cni-498000" primary control-plane node in "newest-cni-498000" cluster
	I0723 07:47:32.901939    6974 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0723 07:47:32.901955    6974 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0723 07:47:32.901967    6974 cache.go:56] Caching tarball of preloaded images
	I0723 07:47:32.902031    6974 preload.go:172] Found /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0723 07:47:32.902037    6974 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0723 07:47:32.902101    6974 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/newest-cni-498000/config.json ...
	I0723 07:47:32.902537    6974 start.go:360] acquireMachinesLock for newest-cni-498000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:47:32.902564    6974 start.go:364] duration metric: took 21.834µs to acquireMachinesLock for "newest-cni-498000"
	I0723 07:47:32.902574    6974 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:47:32.902579    6974 fix.go:54] fixHost starting: 
	I0723 07:47:32.902703    6974 fix.go:112] recreateIfNeeded on newest-cni-498000: state=Stopped err=<nil>
	W0723 07:47:32.902713    6974 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:47:32.906931    6974 out.go:177] * Restarting existing qemu2 VM for "newest-cni-498000" ...
	I0723 07:47:32.914911    6974 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:47:32.914947    6974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:66:f9:69:df:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2
	I0723 07:47:32.916957    6974 main.go:141] libmachine: STDOUT: 
	I0723 07:47:32.916975    6974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:47:32.917006    6974 fix.go:56] duration metric: took 14.426334ms for fixHost
	I0723 07:47:32.917011    6974 start.go:83] releasing machines lock for "newest-cni-498000", held for 14.442792ms
	W0723 07:47:32.917018    6974 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:47:32.917059    6974 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:32.917064    6974 start.go:729] Will try again in 5 seconds ...
	I0723 07:47:37.919121    6974 start.go:360] acquireMachinesLock for newest-cni-498000: {Name:mkf6b1e7eff5e5304e450df4e51ed686c9ebd592 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 07:47:37.919811    6974 start.go:364] duration metric: took 515.375µs to acquireMachinesLock for "newest-cni-498000"
	I0723 07:47:37.919993    6974 start.go:96] Skipping create...Using existing machine configuration
	I0723 07:47:37.920014    6974 fix.go:54] fixHost starting: 
	I0723 07:47:37.920879    6974 fix.go:112] recreateIfNeeded on newest-cni-498000: state=Stopped err=<nil>
	W0723 07:47:37.920909    6974 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 07:47:37.924436    6974 out.go:177] * Restarting existing qemu2 VM for "newest-cni-498000" ...
	I0723 07:47:37.932251    6974 qemu.go:418] Using hvf for hardware acceleration
	I0723 07:47:37.932486    6974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:66:f9:69:df:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19319-1567/.minikube/machines/newest-cni-498000/disk.qcow2
	I0723 07:47:37.942345    6974 main.go:141] libmachine: STDOUT: 
	I0723 07:47:37.942413    6974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0723 07:47:37.942504    6974 fix.go:56] duration metric: took 22.49275ms for fixHost
	I0723 07:47:37.942523    6974 start.go:83] releasing machines lock for "newest-cni-498000", held for 22.658459ms
	W0723 07:47:37.942762    6974 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-498000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-498000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0723 07:47:37.950330    6974 out.go:177] 
	W0723 07:47:37.953366    6974 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0723 07:47:37.953395    6974 out.go:239] * 
	* 
	W0723 07:47:37.955987    6974 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 07:47:37.963247    6974 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-498000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-498000 -n newest-cni-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-498000 -n newest-cni-498000: exit status 7 (66.669167ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
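Every qemu2 start in this run fails at the same point: the driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client (see the command line above), and the client's dial of /var/run/socket_vmnet is refused, i.e. the socket_vmnet daemon is not listening on the build agent. A minimal triage sketch, assuming the standard socket_vmnet install used above (the daemon path and --vmnet-gateway value are assumptions, not taken from this log):

    # Is anything serving the socket the qemu2 driver dials?
    ls -l /var/run/socket_vmnet

    # If not, (re)start the daemon; it needs root to create the vmnet interface.
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

    # Retry the failing start command once the socket exists.

With the daemon down, every test below that needs a VM fails the same way within seconds.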

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-374000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000: exit status 7 (32.345541ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
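The root cause here is a missing kubeconfig context rather than a pod that never became ready: the profile's restart failed earlier, so no context named after the profile was ever written. A quick manual check (sketch; both commands are standard, neither appears in this log):

    # Contexts minikube has written; the profile name doubles as the context name.
    kubectl config get-contexts

    # Profiles minikube itself knows about.
    out/minikube-darwin-arm64 profile list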

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-374000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-374000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-374000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.711958ms)

** stderr **
	error: context "default-k8s-diff-port-374000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-374000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000: exit status 7 (28.463333ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-374000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000: exit status 7 (27.790416ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
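The (-want +got) diff follows go-cmp's convention: each '-' line is an image expected for v1.30.3, and the empty '+' side means `image list` returned nothing, consistent with a host that never booted. The comparison can be reproduced by hand; the jq filter is an assumption about the JSON shape (a list of image objects carrying repoTags):

    # Dump what the runtime reports (command taken from the test), then
    # compare against the expected v1.30.3 image set listed above.
    out/minikube-darwin-arm64 -p default-k8s-diff-port-374000 image list --format=json \
      | jq -r '.[].repoTags[]'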

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-374000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-374000 --alsologtostderr -v=1: exit status 83 (40.780417ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-374000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-374000"
-- /stdout --
** stderr ** 
	I0723 07:47:35.667616    6993 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:47:35.667779    6993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:35.667782    6993 out.go:304] Setting ErrFile to fd 2...
	I0723 07:47:35.667784    6993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:35.667922    6993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:47:35.668142    6993 out.go:298] Setting JSON to false
	I0723 07:47:35.668149    6993 mustload.go:65] Loading cluster: default-k8s-diff-port-374000
	I0723 07:47:35.668340    6993 config.go:182] Loaded profile config "default-k8s-diff-port-374000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:47:35.672918    6993 out.go:177] * The control-plane node default-k8s-diff-port-374000 host is not running: state=Stopped
	I0723 07:47:35.676930    6993 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-374000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-374000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000: exit status 7 (27.368958ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-374000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000: exit status 7 (27.724709ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
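pause exits 83 here because it requires a running control-plane host, while the harness's own probe (status --format={{.Host}}) exits 7 for a stopped host and tolerates it ("may be ok"). A guard in the same spirit (sketch, not taken from the suite):

    # Attempt pause only when status exits 0 (running host); a stopped
    # host returns a non-zero status exit (7 in the probes above).
    if out/minikube-darwin-arm64 status -p default-k8s-diff-port-374000 >/dev/null 2>&1; then
      out/minikube-darwin-arm64 pause -p default-k8s-diff-port-374000
    fi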

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-498000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-498000 -n newest-cni-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-498000 -n newest-cni-498000: exit status 7 (29.34775ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-498000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-498000 --alsologtostderr -v=1: exit status 83 (41.165208ms)

-- stdout --
	* The control-plane node newest-cni-498000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-498000"
-- /stdout --
** stderr ** 
	I0723 07:47:38.144414    7017 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:47:38.144558    7017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:38.144564    7017 out.go:304] Setting ErrFile to fd 2...
	I0723 07:47:38.144568    7017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:47:38.144683    7017 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:47:38.144910    7017 out.go:298] Setting JSON to false
	I0723 07:47:38.144917    7017 mustload.go:65] Loading cluster: newest-cni-498000
	I0723 07:47:38.145114    7017 config.go:182] Loaded profile config "newest-cni-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0723 07:47:38.149649    7017 out.go:177] * The control-plane node newest-cni-498000 host is not running: state=Stopped
	I0723 07:47:38.153602    7017 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-498000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-498000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-498000 -n newest-cni-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-498000 -n newest-cni-498000: exit status 7 (28.509833ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-498000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-498000 -n newest-cni-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-498000 -n newest-cni-498000: exit status 7 (28.8655ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-498000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (162/282)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 13.62
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.1
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 10.15
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.1
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.31
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 206.24
38 TestAddons/parallel/Registry 13.15
39 TestAddons/parallel/Ingress 18.09
40 TestAddons/parallel/InspektorGadget 10.22
41 TestAddons/parallel/MetricsServer 5.26
44 TestAddons/parallel/CSI 41.77
45 TestAddons/parallel/Headlamp 11.39
46 TestAddons/parallel/CloudSpanner 5.16
47 TestAddons/parallel/LocalPath 40.84
48 TestAddons/parallel/NvidiaDevicePlugin 5.14
49 TestAddons/parallel/Yakd 5
50 TestAddons/parallel/Volcano 39.81
53 TestAddons/serial/GCPAuth/Namespaces 0.07
54 TestAddons/StoppedEnableDisable 12.39
62 TestHyperKitDriverInstallOrUpdate 10.55
65 TestErrorSpam/setup 33.39
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.25
68 TestErrorSpam/pause 0.65
69 TestErrorSpam/unpause 0.57
70 TestErrorSpam/stop 64.27
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 50.6
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 39.28
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.05
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.5
82 TestFunctional/serial/CacheCmd/cache/add_local 1.12
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.64
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.92
90 TestFunctional/serial/ExtraConfig 65.25
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.66
93 TestFunctional/serial/LogsFileCmd 0.66
94 TestFunctional/serial/InvalidService 4.12
96 TestFunctional/parallel/ConfigCmd 0.23
97 TestFunctional/parallel/DashboardCmd 8.44
98 TestFunctional/parallel/DryRun 0.32
99 TestFunctional/parallel/InternationalLanguage 0.12
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 26.43
108 TestFunctional/parallel/SSHCmd 0.13
109 TestFunctional/parallel/CpCmd 0.54
111 TestFunctional/parallel/FileSync 0.07
112 TestFunctional/parallel/CertSync 0.4
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.09
120 TestFunctional/parallel/License 0.22
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.26
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
127 TestFunctional/parallel/ImageCommands/ImageBuild 1.77
128 TestFunctional/parallel/ImageCommands/Setup 1.66
129 TestFunctional/parallel/DockerEnv/bash 0.28
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
133 TestFunctional/parallel/ServiceCmd/DeployApp 12.09
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.48
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.24
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.21
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.1
146 TestFunctional/parallel/ServiceCmd/List 0.09
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
149 TestFunctional/parallel/ServiceCmd/Format 0.1
150 TestFunctional/parallel/ServiceCmd/URL 0.1
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
158 TestFunctional/parallel/ProfileCmd/profile_list 0.12
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
160 TestFunctional/parallel/MountCmd/any-port 4.06
161 TestFunctional/parallel/MountCmd/specific-port 0.81
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.07
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 193.88
170 TestMultiControlPlane/serial/DeployApp 4.23
171 TestMultiControlPlane/serial/PingHostFromPods 0.76
172 TestMultiControlPlane/serial/AddWorkerNode 56.22
173 TestMultiControlPlane/serial/NodeLabels 0.12
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.25
175 TestMultiControlPlane/serial/CopyFile 4.37
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 76.87
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 3.4
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.2
221 TestMainNoArgs 0.03
266 TestStoppedBinaryUpgrade/Setup 1.05
268 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
279 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
285 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
286 TestNoKubernetes/serial/ProfileList 0.1
287 TestNoKubernetes/serial/Stop 1.98
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
303 TestStartStop/group/old-k8s-version/serial/Stop 3.46
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
314 TestStartStop/group/no-preload/serial/Stop 1.82
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
325 TestStartStop/group/embed-certs/serial/Stop 3.53
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.1
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.12
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
341 TestStartStop/group/newest-cni/serial/Stop 3.38
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-909000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-909000: exit status 85 (91.757ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-909000 | jenkins | v1.33.1 | 23 Jul 24 06:55 PDT |          |
	|         | -p download-only-909000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 06:55:37
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 06:55:37.478215    2067 out.go:291] Setting OutFile to fd 1 ...
	I0723 06:55:37.478392    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 06:55:37.478395    2067 out.go:304] Setting ErrFile to fd 2...
	I0723 06:55:37.478397    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 06:55:37.478519    2067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	W0723 06:55:37.478590    2067 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19319-1567/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19319-1567/.minikube/config/config.json: no such file or directory
	I0723 06:55:37.479832    2067 out.go:298] Setting JSON to true
	I0723 06:55:37.496940    2067 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1501,"bootTime":1721741436,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 06:55:37.497003    2067 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 06:55:37.501768    2067 out.go:97] [download-only-909000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 06:55:37.501943    2067 notify.go:220] Checking for updates...
	W0723 06:55:37.501999    2067 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball: no such file or directory
	I0723 06:55:37.504704    2067 out.go:169] MINIKUBE_LOCATION=19319
	I0723 06:55:37.507805    2067 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 06:55:37.512726    2067 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 06:55:37.515753    2067 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 06:55:37.518751    2067 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	W0723 06:55:37.524703    2067 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0723 06:55:37.524896    2067 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 06:55:37.528725    2067 out.go:97] Using the qemu2 driver based on user configuration
	I0723 06:55:37.528744    2067 start.go:297] selected driver: qemu2
	I0723 06:55:37.528764    2067 start.go:901] validating driver "qemu2" against <nil>
	I0723 06:55:37.528839    2067 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 06:55:37.531720    2067 out.go:169] Automatically selected the socket_vmnet network
	I0723 06:55:37.537579    2067 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0723 06:55:37.537678    2067 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 06:55:37.537704    2067 cni.go:84] Creating CNI manager for ""
	I0723 06:55:37.537720    2067 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0723 06:55:37.537762    2067 start.go:340] cluster config:
	{Name:download-only-909000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-909000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 06:55:37.543075    2067 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 06:55:37.546877    2067 out.go:97] Downloading VM boot image ...
	I0723 06:55:37.546902    2067 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0723 06:55:46.569430    2067 out.go:97] Starting "download-only-909000" primary control-plane node in "download-only-909000" cluster
	I0723 06:55:46.569457    2067 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0723 06:55:46.639595    2067 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0723 06:55:46.639613    2067 cache.go:56] Caching tarball of preloaded images
	I0723 06:55:46.639795    2067 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0723 06:55:46.647912    2067 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0723 06:55:46.647920    2067 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0723 06:55:46.724317    2067 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0723 06:55:53.803533    2067 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0723 06:55:53.803707    2067 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0723 06:55:54.499049    2067 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0723 06:55:54.499243    2067 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/download-only-909000/config.json ...
	I0723 06:55:54.499261    2067 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/download-only-909000/config.json: {Name:mkc0920811cfb85cd807206e046ab53156a5fad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 06:55:54.499496    2067 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0723 06:55:54.499690    2067 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0723 06:55:54.940396    2067 out.go:169] 
	W0723 06:55:54.945455    2067 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19319-1567/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104865a60 0x104865a60 0x104865a60 0x104865a60 0x104865a60 0x104865a60 0x104865a60] Decompressors:map[bz2:0x14000523240 gz:0x14000523248 tar:0x140005231f0 tar.bz2:0x14000523200 tar.gz:0x14000523210 tar.xz:0x14000523220 tar.zst:0x14000523230 tbz2:0x14000523200 tgz:0x14000523210 txz:0x14000523220 tzst:0x14000523230 xz:0x14000523250 zip:0x14000523260 zst:0x14000523258] Getters:map[file:0x14000b0e910 http:0x140009001e0 https:0x14000900230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0723 06:55:54.945477    2067 out_reason.go:110] 
	W0723 06:55:54.954272    2067 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 06:55:54.958418    2067 out.go:169] 
	
	
	* The control-plane node download-only-909000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-909000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
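The only failing step inside this otherwise-passing test is the kubectl cache: go-getter is told to validate the download against the sibling .sha256 file via the ?checksum=file: query, and that checksum URL returns 404, likely because v1.20.0 predates published darwin/arm64 client binaries. Both URLs can be probed by hand (taken verbatim from the log):

    # The 404 on the .sha256 is what aborts the cache step.
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl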

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-909000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (13.62s)
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-378000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-378000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (13.623461333s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (13.62s)

TestDownloadOnly/v1.30.3/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-378000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-378000: exit status 85 (78.776791ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-909000 | jenkins | v1.33.1 | 23 Jul 24 06:55 PDT |                     |
	|         | -p download-only-909000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 23 Jul 24 06:55 PDT | 23 Jul 24 06:55 PDT |
	| delete  | -p download-only-909000        | download-only-909000 | jenkins | v1.33.1 | 23 Jul 24 06:55 PDT | 23 Jul 24 06:55 PDT |
	| start   | -o=json --download-only        | download-only-378000 | jenkins | v1.33.1 | 23 Jul 24 06:55 PDT |                     |
	|         | -p download-only-378000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 06:55:55
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 06:55:55.360722    2103 out.go:291] Setting OutFile to fd 1 ...
	I0723 06:55:55.360854    2103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 06:55:55.360857    2103 out.go:304] Setting ErrFile to fd 2...
	I0723 06:55:55.360859    2103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 06:55:55.360991    2103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 06:55:55.362009    2103 out.go:298] Setting JSON to true
	I0723 06:55:55.378238    2103 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1519,"bootTime":1721741436,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 06:55:55.378307    2103 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 06:55:55.382657    2103 out.go:97] [download-only-378000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 06:55:55.382782    2103 notify.go:220] Checking for updates...
	I0723 06:55:55.386462    2103 out.go:169] MINIKUBE_LOCATION=19319
	I0723 06:55:55.389629    2103 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 06:55:55.393661    2103 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 06:55:55.395059    2103 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 06:55:55.398681    2103 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	W0723 06:55:55.404596    2103 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0723 06:55:55.404750    2103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 06:55:55.407671    2103 out.go:97] Using the qemu2 driver based on user configuration
	I0723 06:55:55.407680    2103 start.go:297] selected driver: qemu2
	I0723 06:55:55.407685    2103 start.go:901] validating driver "qemu2" against <nil>
	I0723 06:55:55.407750    2103 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 06:55:55.410651    2103 out.go:169] Automatically selected the socket_vmnet network
	I0723 06:55:55.415779    2103 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0723 06:55:55.415862    2103 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 06:55:55.415878    2103 cni.go:84] Creating CNI manager for ""
	I0723 06:55:55.415886    2103 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 06:55:55.415891    2103 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 06:55:55.415930    2103 start.go:340] cluster config:
	{Name:download-only-378000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-378000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 06:55:55.419430    2103 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 06:55:55.422572    2103 out.go:97] Starting "download-only-378000" primary control-plane node in "download-only-378000" cluster
	I0723 06:55:55.422579    2103 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 06:55:55.476846    2103 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0723 06:55:55.476866    2103 cache.go:56] Caching tarball of preloaded images
	I0723 06:55:55.477058    2103 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0723 06:55:55.481297    2103 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0723 06:55:55.481304    2103 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0723 06:55:55.559410    2103 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-378000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-378000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
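The preload mechanics above follow a stable URL scheme: a bucket path keyed by preload schema (v18), Kubernetes version, container runtime, storage driver, and architecture, with an md5 carried in the query string. The same tarball can be fetched and verified by hand (URL and checksum taken from the log; md5 is the stock macOS tool):

    curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
    # Expect 5a76dba1959f6b6fc5e29e1e172ab9ca, as printed in the log.
    md5 preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4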

TestDownloadOnly/v1.30.3/DeleteAll (0.1s)
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.10s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-378000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (10.15s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-926000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-926000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (10.152167416s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (10.15s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-926000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-926000: exit status 85 (79.187917ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-909000 | jenkins | v1.33.1 | 23 Jul 24 06:55 PDT |                     |
	|         | -p download-only-909000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 23 Jul 24 06:55 PDT | 23 Jul 24 06:55 PDT |
	| delete  | -p download-only-909000             | download-only-909000 | jenkins | v1.33.1 | 23 Jul 24 06:55 PDT | 23 Jul 24 06:55 PDT |
	| start   | -o=json --download-only             | download-only-378000 | jenkins | v1.33.1 | 23 Jul 24 06:55 PDT |                     |
	|         | -p download-only-378000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 23 Jul 24 06:56 PDT | 23 Jul 24 06:56 PDT |
	| delete  | -p download-only-378000             | download-only-378000 | jenkins | v1.33.1 | 23 Jul 24 06:56 PDT | 23 Jul 24 06:56 PDT |
	| start   | -o=json --download-only             | download-only-926000 | jenkins | v1.33.1 | 23 Jul 24 06:56 PDT |                     |
	|         | -p download-only-926000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 06:56:09
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 06:56:09.265226    2125 out.go:291] Setting OutFile to fd 1 ...
	I0723 06:56:09.265353    2125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 06:56:09.265355    2125 out.go:304] Setting ErrFile to fd 2...
	I0723 06:56:09.265358    2125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 06:56:09.265496    2125 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 06:56:09.266630    2125 out.go:298] Setting JSON to true
	I0723 06:56:09.282703    2125 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1533,"bootTime":1721741436,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 06:56:09.282766    2125 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 06:56:09.287249    2125 out.go:97] [download-only-926000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 06:56:09.287343    2125 notify.go:220] Checking for updates...
	I0723 06:56:09.291059    2125 out.go:169] MINIKUBE_LOCATION=19319
	I0723 06:56:09.295226    2125 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 06:56:09.299232    2125 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 06:56:09.302236    2125 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 06:56:09.305427    2125 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	W0723 06:56:09.309627    2125 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0723 06:56:09.309823    2125 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 06:56:09.313170    2125 out.go:97] Using the qemu2 driver based on user configuration
	I0723 06:56:09.313183    2125 start.go:297] selected driver: qemu2
	I0723 06:56:09.313187    2125 start.go:901] validating driver "qemu2" against <nil>
	I0723 06:56:09.313247    2125 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 06:56:09.316283    2125 out.go:169] Automatically selected the socket_vmnet network
	I0723 06:56:09.321367    2125 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0723 06:56:09.321555    2125 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 06:56:09.321574    2125 cni.go:84] Creating CNI manager for ""
	I0723 06:56:09.321581    2125 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0723 06:56:09.321593    2125 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 06:56:09.321630    2125 start.go:340] cluster config:
	{Name:download-only-926000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 06:56:09.325122    2125 iso.go:125] acquiring lock: {Name:mkebc30c62a229e9e211009b37d5c757e67c1626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 06:56:09.328188    2125 out.go:97] Starting "download-only-926000" primary control-plane node in "download-only-926000" cluster
	I0723 06:56:09.328200    2125 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0723 06:56:09.385272    2125 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0723 06:56:09.385291    2125 cache.go:56] Caching tarball of preloaded images
	I0723 06:56:09.385483    2125 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0723 06:56:09.389680    2125 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0723 06:56:09.389688    2125 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0723 06:56:09.467492    2125 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0723 06:56:16.908346    2125 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0723 06:56:16.909389    2125 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0723 06:56:17.429254    2125 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0723 06:56:17.429454    2125 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/download-only-926000/config.json ...
	I0723 06:56:17.429473    2125 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/download-only-926000/config.json: {Name:mk6cfc4dace4efec172bfbc19b63fdf0a37f84a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 06:56:17.429702    2125 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0723 06:56:17.429824    2125 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19319-1567/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-926000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-926000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)
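
The non-zero exit above is the expected outcome: a download-only profile never creates a host, so "minikube logs" has nothing to read and the test passes on exit status 85. A minimal, hypothetical sketch of the same exit-code assertion in Go (binary path and profile name taken from the log above; not the test's actual code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run "logs" against a profile whose host was never created; the
		// command is expected to fail with exit status 85, as asserted above.
		cmd := exec.Command("out/minikube-darwin-arm64", "logs", "-p", "download-only-926000")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
			fmt.Println("got expected exit status 85")
			return
		}
		fmt.Printf("unexpected result: %v\n", err)
	}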

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-926000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.31s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-833000 --alsologtostderr --binary-mirror http://127.0.0.1:49326 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-833000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-833000
--- PASS: TestBinaryMirror (0.31s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-861000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-861000: exit status 85 (53.718959ms)

-- stdout --
	* Profile "addons-861000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-861000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-861000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-861000: exit status 85 (57.5865ms)

-- stdout --
	* Profile "addons-861000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-861000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (206.24s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-861000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-861000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m26.23911525s)
--- PASS: TestAddons/Setup (206.24s)

TestAddons/parallel/Registry (13.15s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 7.410833ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-2hj85" [91f1486b-5927-4346-98db-f0fbfdb649af] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004175416s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xmcsx" [953663fb-d982-4b02-b8cc-f64934353398] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004098417s
addons_test.go:342: (dbg) Run:  kubectl --context addons-861000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-861000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-861000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.83569625s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-861000 ip
2024/07/23 06:59:59 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-861000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.15s)
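
The two label waits above poll kube-system pods until they report Running before the registry is exercised. A rough, hypothetical equivalent of that readiness check using plain kubectl invocations (context, labels, and namespace copied from the log; this is not the test helper's actual implementation, and the timeout is illustrative):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Block until pods carrying each registry label are Ready,
		// mirroring the waits in the test.
		for _, label := range []string{"actual-registry=true", "registry-proxy=true"} {
			out, err := exec.Command("kubectl", "--context", "addons-861000",
				"wait", "--for=condition=Ready", "pod", "-l", label,
				"-n", "kube-system", "--timeout=6m").CombinedOutput()
			if err != nil {
				log.Fatalf("wait for %s failed: %v\n%s", label, err, out)
			}
			fmt.Printf("%s", out)
		}
	}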

TestAddons/parallel/Ingress (18.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-861000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-861000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-861000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cb8f8896-9965-48c9-b4a6-c8d05900878b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cb8f8896-9965-48c9-b4a6-c8d05900878b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003447833s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-861000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-861000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-861000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-861000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-861000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-861000 addons disable ingress --alsologtostderr -v=1: (7.197206708s)
--- PASS: TestAddons/parallel/Ingress (18.09s)

TestAddons/parallel/InspektorGadget (10.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tqsl4" [e67c7b3e-dfaa-4080-abee-6d1244a24466] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004125875s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-861000
addons_test.go:843: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-861000: (5.216695917s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.26s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.295333ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-bx2wl" [be758469-bb89-469f-92a7-745a0e7307d4] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004626625s
addons_test.go:417: (dbg) Run:  kubectl --context addons-861000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-861000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)

TestAddons/parallel/CSI (41.77s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 3.505958ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-861000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-861000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6b0e51a6-7681-49d9-906b-9ed25c6ec427] Pending
helpers_test.go:344: "task-pv-pod" [6b0e51a6-7681-49d9-906b-9ed25c6ec427] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6b0e51a6-7681-49d9-906b-9ed25c6ec427] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.002128125s
addons_test.go:586: (dbg) Run:  kubectl --context addons-861000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-861000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-861000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-861000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-861000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-861000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-861000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [369ec47f-7351-4837-ad0e-984fed6b4af1] Pending
helpers_test.go:344: "task-pv-pod-restore" [369ec47f-7351-4837-ad0e-984fed6b4af1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [369ec47f-7351-4837-ad0e-984fed6b4af1] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.004309292s
addons_test.go:628: (dbg) Run:  kubectl --context addons-861000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-861000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-861000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-arm64 -p addons-861000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-arm64 -p addons-861000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.069438166s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-861000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.77s)
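
The repeated "get pvc ... -o jsonpath={.status.phase}" calls above are a poll loop: the helper re-reads the claim's phase until it leaves Pending. A compact sketch of that loop (context and claim name from the log; the interval and deadline are illustrative assumptions, not the helper's actual values):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			// Same query the helper issues on each iteration.
			out, err := exec.Command("kubectl", "--context", "addons-861000",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && string(out) == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for pvc hpvc to bind")
	}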

TestAddons/parallel/Headlamp (11.39s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-861000 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-tdbwc" [8b6e8d08-8f32-4922-a9f7-a0c8baa046d6] Pending
helpers_test.go:344: "headlamp-7867546754-tdbwc" [8b6e8d08-8f32-4922-a9f7-a0c8baa046d6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-tdbwc" [8b6e8d08-8f32-4922-a9f7-a0c8baa046d6] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.0049245s
--- PASS: TestAddons/parallel/Headlamp (11.39s)

TestAddons/parallel/CloudSpanner (5.16s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-2mx9w" [2b4d0b93-2d97-41f7-87a8-f10cf239f461] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003662333s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-861000
--- PASS: TestAddons/parallel/CloudSpanner (5.16s)

TestAddons/parallel/LocalPath (40.84s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-861000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-861000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-861000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fb2ee6b1-98bb-4049-bc7a-401265400736] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fb2ee6b1-98bb-4049-bc7a-401265400736] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fb2ee6b1-98bb-4049-bc7a-401265400736] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005263917s
addons_test.go:992: (dbg) Run:  kubectl --context addons-861000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-arm64 -p addons-861000 ssh "cat /opt/local-path-provisioner/pvc-44c0ba1b-72e0-46af-96ad-f31b2e9936af_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-861000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-861000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-arm64 -p addons-861000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-arm64 -p addons-861000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.348810458s)
--- PASS: TestAddons/parallel/LocalPath (40.84s)

TestAddons/parallel/NvidiaDevicePlugin (5.14s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2xb85" [ca38e4c3-134f-4e2e-a73b-7208a1f54425] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004393209s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-861000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.14s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-bj6gg" [64edcd90-1fc9-46c1-af5d-15694c19c198] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004121583s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/parallel/Volcano (39.81s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 1.770875ms
addons_test.go:889: volcano-scheduler stabilized in 1.892666ms
addons_test.go:897: volcano-admission stabilized in 2.128208ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-m5p7t" [bf93c759-e3ef-44ae-921f-5cf3f29ca5f1] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.003786667s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-8klcx" [189f3984-1399-4767-962b-30bbe599b74e] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.003971834s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-fh2rn" [836331e9-9ba0-4b58-939d-d933b9120674] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.003980042s
addons_test.go:924: (dbg) Run:  kubectl --context addons-861000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-861000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-861000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [9be9adab-604c-4ba8-a71a-4c73f4617500] Pending
helpers_test.go:344: "test-job-nginx-0" [9be9adab-604c-4ba8-a71a-4c73f4617500] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [9be9adab-604c-4ba8-a71a-4c73f4617500] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 15.003689834s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-arm64 -p addons-861000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-arm64 -p addons-861000 addons disable volcano --alsologtostderr -v=1: (9.614570584s)
--- PASS: TestAddons/parallel/Volcano (39.81s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-861000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-861000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-861000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-861000: (12.198150959s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-861000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-861000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-861000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (10.55s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.55s)

TestErrorSpam/setup (33.39s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-809000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-809000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 --driver=qemu2 : (33.385484916s)
--- PASS: TestErrorSpam/setup (33.39s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 pause
--- PASS: TestErrorSpam/pause (0.65s)

TestErrorSpam/unpause (0.57s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 unpause
--- PASS: TestErrorSpam/unpause (0.57s)

TestErrorSpam/stop (64.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 stop: (12.201468542s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 stop: (26.031797792s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-809000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-809000 stop: (26.035303875s)
--- PASS: TestErrorSpam/stop (64.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19319-1567/.minikube/files/etc/test/nested/copy/2065/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.6s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-693000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-693000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (50.597414667s)
--- PASS: TestFunctional/serial/StartWithProxy (50.60s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.28s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-693000 --alsologtostderr -v=8
E0723 07:04:46.588024    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:04:46.594898    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:04:46.606949    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:04:46.629015    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:04:46.671082    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:04:46.753137    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:04:46.915209    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:04:47.237266    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:04:47.879416    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:04:49.160342    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:04:51.722419    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:04:56.844499    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-693000 --alsologtostderr -v=8: (39.275229583s)
functional_test.go:659: soft start took 39.275613416s for "functional-693000" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.28s)
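
The cert_rotation warnings in the block above are leftover noise: client-go keeps trying to reload the client certificate of the already-deleted addons-861000 profile. The gaps between the timestamps roughly double, from about 6 ms up to ~5 s, which is the signature of an exponential-backoff retry. A toy sketch of that retry shape (the constants are illustrative guesses, not client-go's actual values):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Retry with a doubling delay, matching the ~6ms -> ~5s spacing of
		// the repeated "key failed" lines above.
		delay := 6 * time.Millisecond
		for attempt := 1; attempt <= 12; attempt++ {
			fmt.Printf("attempt %d failed, retrying in %v\n", attempt, delay)
			time.Sleep(delay)
			delay *= 2
		}
	}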

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-693000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.50s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local887641261/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cache add minikube-local-cache-test:functional-693000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cache delete minikube-local-cache-test:functional-693000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-693000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (70.255583ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cache reload
E0723 07:05:07.084875    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)
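
The sequence above removes pause:latest inside the node, confirms that "crictl inspecti" now fails, restores the image with "cache reload", and re-checks it. The same round trip, sketched as three commands (binary and profile taken from the log; the intermediate must-fail check is omitted here for brevity, so this is a simplification, not the test's actual code):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		steps := [][]string{
			// Delete the cached image inside the node...
			{"-p", "functional-693000", "ssh", "sudo", "docker", "rmi", "registry.k8s.io/pause:latest"},
			// ...push all cached images back into the node...
			{"-p", "functional-693000", "cache", "reload"},
			// ...and verify the image is present again.
			{"-p", "functional-693000", "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"},
		}
		for _, args := range steps {
			if out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput(); err != nil {
				log.Fatalf("%v: %v\n%s", args, err, out)
			}
		}
	}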

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 kubectl -- --context functional-693000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-693000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)
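
Note: the two kubectl tests above exercise the same thing two ways. "minikube kubectl --" forwards everything after the double dash to a kubectl binary matching the cluster version, while out/kubectl calls that binary directly; both should return the same pod list:

    out/minikube-darwin-arm64 -p functional-693000 kubectl -- --context functional-693000 get pods
    out/kubectl --context functional-693000 get pods   # same result, without the wrapper
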
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-693000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0723 07:05:27.565227    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:06:08.526687    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-693000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m5.249778417s)
functional_test.go:757: restart took 1m5.249892167s for "functional-693000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (65.25s)
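
Note: --extra-config takes the form <component>.<flag>=<value>, so the run above is equivalent to handing enable-admission-plugins=NamespaceAutoProvision to the apiserver; --wait=all is what stretches the restart past the minute mark, since it blocks until every verified component reports healthy:

    out/minikube-darwin-arm64 start -p functional-693000 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
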
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-693000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
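
Note: the health check pulls every tier=control-plane pod in kube-system as JSON and asserts phase Running plus a Ready condition for each. An equivalent one-line spot-check (a jsonpath sketch, not the test's own code):

    kubectl --context functional-693000 get po -l tier=control-plane -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}: {.status.phase}{"\n"}{end}'
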
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2543083643/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.66s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-693000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-693000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-693000: exit status 115 (101.507667ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32739 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-693000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.12s)
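
Note: exit status 115 is the expected outcome here, not a failure: the Service object applies cleanly, but "minikube service" refuses to hand out a URL for a service with no running pods behind it (SVC_UNREACHABLE). The three steps are reproducible by hand:

    kubectl --context functional-693000 apply -f testdata/invalidsvc.yaml
    out/minikube-darwin-arm64 service invalid-svc -p functional-693000   # exit 115: no running pod for the service
    kubectl --context functional-693000 delete -f testdata/invalidsvc.yaml
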
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 config get cpus: exit status 14 (29.900333ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 config get cpus: exit status 14 (29.41425ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
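
Note: exit status 14 is minikube's "key not found in config" code, so the two Non-zero exits above are exactly the assertions the test wants. The round trip by hand:

    out/minikube-darwin-arm64 -p functional-693000 config get cpus     # exit 14: key is unset
    out/minikube-darwin-arm64 -p functional-693000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-693000 config get cpus     # prints 2
    out/minikube-darwin-arm64 -p functional-693000 config unset cpus
    out/minikube-darwin-arm64 -p functional-693000 config get cpus     # exit 14 again
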
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-693000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-693000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3286: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.44s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-693000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-693000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (161.060792ms)

-- stdout --
	* [functional-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0723 07:07:06.795871    3252 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:07:06.799496    3252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:07:06.799501    3252 out.go:304] Setting ErrFile to fd 2...
	I0723 07:07:06.799504    3252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:07:06.799627    3252 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:07:06.803704    3252 out.go:298] Setting JSON to false
	I0723 07:07:06.822485    3252 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2190,"bootTime":1721741436,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:07:06.822554    3252 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:07:06.829466    3252 out.go:177] * [functional-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0723 07:07:06.838412    3252 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:07:06.838496    3252 notify.go:220] Checking for updates...
	I0723 07:07:06.844509    3252 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:07:06.852416    3252 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:07:06.860412    3252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:07:06.867530    3252 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:07:06.874500    3252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:07:06.878799    3252 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:07:06.879070    3252 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:07:06.883510    3252 out.go:177] * Using the qemu2 driver based on existing profile
	I0723 07:07:06.890361    3252 start.go:297] selected driver: qemu2
	I0723 07:07:06.890368    3252 start.go:901] validating driver "qemu2" against &{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:07:06.890445    3252 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:07:06.902467    3252 out.go:177] 
	W0723 07:07:06.910524    3252 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0723 07:07:06.921450    3252 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-693000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.32s)
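
Note: --dry-run validates flags against the existing profile without touching the VM, which is why the undersized request fails fast with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while the second invocation, without the 250MB request, validates cleanly. Stripped of the log flags:

    out/minikube-darwin-arm64 start -p functional-693000 --dry-run --memory 250MB --driver=qemu2   # exit 23: below the 1800MB usable minimum
    out/minikube-darwin-arm64 start -p functional-693000 --dry-run --driver=qemu2                  # exit 0
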
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-693000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-693000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.793625ms)

-- stdout --
	* [functional-693000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0723 07:07:07.106276    3263 out.go:291] Setting OutFile to fd 1 ...
	I0723 07:07:07.106407    3263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:07:07.106410    3263 out.go:304] Setting ErrFile to fd 2...
	I0723 07:07:07.106413    3263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 07:07:07.106557    3263 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
	I0723 07:07:07.108326    3263 out.go:298] Setting JSON to false
	I0723 07:07:07.130506    3263 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2191,"bootTime":1721741436,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0723 07:07:07.130680    3263 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0723 07:07:07.134559    3263 out.go:177] * [functional-693000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0723 07:07:07.138618    3263 notify.go:220] Checking for updates...
	I0723 07:07:07.142785    3263 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 07:07:07.145477    3263 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	I0723 07:07:07.148483    3263 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0723 07:07:07.151455    3263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 07:07:07.156453    3263 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	I0723 07:07:07.159482    3263 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 07:07:07.162771    3263 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0723 07:07:07.163037    3263 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 07:07:07.167479    3263 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0723 07:07:07.174473    3263 start.go:297] selected driver: qemu2
	I0723 07:07:07.174479    3263 start.go:901] validating driver "qemu2" against &{Name:functional-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 07:07:07.174539    3263 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 07:07:07.180425    3263 out.go:177] 
	W0723 07:07:07.184498    3263 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0723 07:07:07.188451    3263 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
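
Note: status supports three output shapes, all exercised above: the default table, a Go template over the status struct fields via -f (Host, Kubelet, APIServer, Kubeconfig), and machine-readable JSON via -o json. A trimmed template sketch over two of those fields:

    out/minikube-darwin-arm64 -p functional-693000 status -o json
    out/minikube-darwin-arm64 -p functional-693000 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
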
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9f9ed311-30be-40e7-a66a-2171b56d51f7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003336459s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-693000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-693000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-693000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-693000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [34ad0528-80e4-44a2-802b-444472ec4394] Pending
helpers_test.go:344: "sp-pod" [34ad0528-80e4-44a2-802b-444472ec4394] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [34ad0528-80e4-44a2-802b-444472ec4394] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003687625s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-693000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-693000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-693000 delete -f testdata/storage-provisioner/pod.yaml: (1.025629875s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-693000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4cb3d115-1363-4a6f-8cdf-08b1b928f31c] Pending
helpers_test.go:344: "sp-pod" [4cb3d115-1363-4a6f-8cdf-08b1b928f31c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4cb3d115-1363-4a6f-8cdf-08b1b928f31c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003551875s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-693000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.43s)
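
Note: the delete-and-reapply in the middle is the actual persistence assertion: /tmp/mount is backed by the PVC, so the foo file written by the first sp-pod must still be visible to its replacement. The flow by hand:

    kubectl --context functional-693000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-693000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-693000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-693000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-693000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-693000 exec sp-pod -- ls /tmp/mount   # foo survives the pod replacement
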
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh -n functional-693000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cp functional-693000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd349912931/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh -n functional-693000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh -n functional-693000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.54s)
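
Note: cp addresses the node side as <profile>:<path> and works in both directions; the last pair above also shows a missing destination directory inside the VM being created on the fly. The two directions (the local destination path here is illustrative):

    out/minikube-darwin-arm64 -p functional-693000 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host to node
    out/minikube-darwin-arm64 -p functional-693000 cp functional-693000:/home/docker/cp-test.txt ./cp-test.txt   # node to host
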
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2065/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/test/nested/copy/2065/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)
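
Note: the synced file comes from the host side of the profile. Per minikube's documented file-sync behavior, anything placed under $MINIKUBE_HOME/.minikube/files/<path> is copied to /<path> inside the VM at start, and the test seeds a path keyed by its own pid (2065); the check is then just:

    out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/test/nested/copy/2065/hosts"
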
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2065.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/ssl/certs/2065.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2065.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /usr/share/ca-certificates/2065.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/20652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/ssl/certs/20652.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/20652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /usr/share/ca-certificates/20652.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.40s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-693000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
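
Note: the template iterates the first node's metadata.labels map and prints each key, so a pass only asserts that the node carries a well-formed label set. It runs directly:

    kubectl --context functional-693000 get nodes --output=go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
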
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "sudo systemctl is-active crio": exit status 1 (87.765083ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)
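
Note: systemctl is-active exits non-zero for a unit that is not active (status 3 with "inactive" on stdout here), so the Non-zero exit is the pass condition: with Docker as the selected runtime, crio must be off. By hand:

    out/minikube-darwin-arm64 -p functional-693000 ssh "sudo systemctl is-active crio"   # prints inactive, exit 3
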
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-693000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-693000
docker.io/kicbase/echo-server:functional-693000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-693000 image ls --format short --alsologtostderr:
I0723 07:07:07.775145    3283 out.go:291] Setting OutFile to fd 1 ...
I0723 07:07:07.775322    3283 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:07:07.775329    3283 out.go:304] Setting ErrFile to fd 2...
I0723 07:07:07.775331    3283 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:07:07.775477    3283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
I0723 07:07:07.775916    3283 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0723 07:07:07.775980    3283 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0723 07:07:07.776787    3283 ssh_runner.go:195] Run: systemctl --version
I0723 07:07:07.776795    3283 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/functional-693000/id_rsa Username:docker}
I0723 07:07:07.805167    3283 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-693000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-693000 | ef3467a9d3e00 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | latest            | 443d199e8bfcc | 193MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/nginx                     | alpine            | 5461b18aaccf3 | 44.8MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| docker.io/kicbase/echo-server               | functional-693000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/localhost/my-image                | functional-693000 | 0b869d5fecb45 | 1.41MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-693000 image ls --format table --alsologtostderr:
I0723 07:07:09.772472    3296 out.go:291] Setting OutFile to fd 1 ...
I0723 07:07:09.772636    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:07:09.772643    3296 out.go:304] Setting ErrFile to fd 2...
I0723 07:07:09.772645    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:07:09.772773    3296 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
I0723 07:07:09.773268    3296 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0723 07:07:09.773329    3296 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0723 07:07:09.774130    3296 ssh_runner.go:195] Run: systemctl --version
I0723 07:07:09.774140    3296 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/functional-693000/id_rsa Username:docker}
I0723 07:07:09.801722    3296 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/07/23 07:07:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-693000 image ls --format json --alsologtostderr:
[{"id":"443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"0b869d5fecb4558c297d79d7b885b11f6f15f0961a3954fdfaa6b3f42f72ab06","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-693000"],"size":"1410000"},{"id":"ef3467a9d3e00c6f0f6b0d32721e96ee948bcc0069ccf249b2b791b62b9c37bc","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-693000"],"size":"30"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-693000"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-693000 image ls --format json --alsologtostderr:
I0723 07:07:09.695796    3294 out.go:291] Setting OutFile to fd 1 ...
I0723 07:07:09.695965    3294 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:07:09.695968    3294 out.go:304] Setting ErrFile to fd 2...
I0723 07:07:09.695971    3294 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:07:09.696109    3294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
I0723 07:07:09.696551    3294 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0723 07:07:09.696624    3294 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0723 07:07:09.697493    3294 ssh_runner.go:195] Run: systemctl --version
I0723 07:07:09.697502    3294 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/functional-693000/id_rsa Username:docker}
I0723 07:07:09.730649    3294 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-693000 image ls --format yaml --alsologtostderr:
- id: 0b869d5fecb4558c297d79d7b885b11f6f15f0961a3954fdfaa6b3f42f72ab06
repoDigests: []
repoTags:
- docker.io/localhost/my-image:functional-693000
size: "1410000"
- id: ef3467a9d3e00c6f0f6b0d32721e96ee948bcc0069ccf249b2b791b62b9c37bc
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-693000
size: "30"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-693000
size: "4780000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-693000 image ls --format yaml --alsologtostderr:
I0723 07:07:09.625456    3292 out.go:291] Setting OutFile to fd 1 ...
I0723 07:07:09.625618    3292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:07:09.625621    3292 out.go:304] Setting ErrFile to fd 2...
I0723 07:07:09.625623    3292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:07:09.625749    3292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
I0723 07:07:09.626160    3292 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0723 07:07:09.626219    3292 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0723 07:07:09.627067    3292 ssh_runner.go:195] Run: systemctl --version
I0723 07:07:09.627076    3292 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/functional-693000/id_rsa Username:docker}
I0723 07:07:09.654713    3292 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)
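
Note: the four ImageList variants are one command with different serializers chosen by --format; the later runs (table/json/yaml) also list docker.io/localhost/my-image:functional-693000 because they executed after ImageBuild, while the short listing ran before it:

    out/minikube-darwin-arm64 -p functional-693000 image ls --format table   # any of: short | table | json | yaml
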
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh pgrep buildkitd: exit status 1 (63.413167ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image build -t localhost/my-image:functional-693000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-693000 image build -t localhost/my-image:functional-693000 testdata/build --alsologtostderr: (1.639355125s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-693000 image build -t localhost/my-image:functional-693000 testdata/build --alsologtostderr:
I0723 07:07:07.915404    3288 out.go:291] Setting OutFile to fd 1 ...
I0723 07:07:07.915633    3288 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:07:07.915636    3288 out.go:304] Setting ErrFile to fd 2...
I0723 07:07:07.915639    3288 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 07:07:07.915762    3288 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19319-1567/.minikube/bin
I0723 07:07:07.916195    3288 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0723 07:07:07.916975    3288 config.go:182] Loaded profile config "functional-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0723 07:07:07.917860    3288 ssh_runner.go:195] Run: systemctl --version
I0723 07:07:07.917874    3288 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19319-1567/.minikube/machines/functional-693000/id_rsa Username:docker}
I0723 07:07:07.945930    3288 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3754722104.tar
I0723 07:07:07.945986    3288 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0723 07:07:07.950021    3288 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3754722104.tar
I0723 07:07:07.951532    3288 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3754722104.tar: stat -c "%s %y" /var/lib/minikube/build/build.3754722104.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3754722104.tar': No such file or directory
I0723 07:07:07.951543    3288 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3754722104.tar --> /var/lib/minikube/build/build.3754722104.tar (3072 bytes)
I0723 07:07:07.961641    3288 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3754722104
I0723 07:07:07.965184    3288 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3754722104 -xf /var/lib/minikube/build/build.3754722104.tar
I0723 07:07:07.968757    3288 docker.go:360] Building image: /var/lib/minikube/build/build.3754722104
I0723 07:07:07.968813    3288 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-693000 /var/lib/minikube/build/build.3754722104
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:0b869d5fecb4558c297d79d7b885b11f6f15f0961a3954fdfaa6b3f42f72ab06 done
#8 naming to localhost/my-image:functional-693000 done
#8 DONE 0.0s
I0723 07:07:09.512011    3288 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-693000 /var/lib/minikube/build/build.3754722104: (1.543212875s)
I0723 07:07:09.512076    3288 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3754722104
I0723 07:07:09.515940    3288 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3754722104.tar
I0723 07:07:09.519204    3288 build_images.go:217] Built localhost/my-image:functional-693000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3754722104.tar
I0723 07:07:09.519219    3288 build_images.go:133] succeeded building to: functional-693000
I0723 07:07:09.519222    3288 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.77s)
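The BuildKit trace above implies a three-instruction Dockerfile (97 B build definition; FROM busybox, RUN true, ADD content.txt). A minimal sketch for reproducing the build by hand, assuming Dockerfile contents reconstructed from the trace rather than taken from minikube's actual testdata/build sources:

    # Hypothetical reconstruction of the build context from the trace above.
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF
    echo "some content" > content.txt
    # Same invocation the test uses; minikube handles the tarball/scp plumbing
    # (build.*.tar upload to /var/lib/minikube/build) seen in the log.
    out/minikube-darwin-arm64 -p functional-693000 image build \
      -t localhost/my-image:functional-693000 . --alsologtostderr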

TestFunctional/parallel/ImageCommands/Setup (1.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.6405715s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-693000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.66s)

TestFunctional/parallel/DockerEnv/bash (0.28s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-693000 docker-env) && out/minikube-darwin-arm64 status -p functional-693000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-693000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-693000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-693000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-r5f7z" [f8089f8d-d588-4543-acc9-c50832cf5440] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-r5f7z" [f8089f8d-d588-4543-acc9-c50832cf5440] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003957417s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image load --daemon docker.io/kicbase/echo-server:functional-693000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image load --daemon docker.io/kicbase/echo-server:functional-693000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-693000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image load --daemon docker.io/kicbase/echo-server:functional-693000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image save docker.io/kicbase/echo-server:functional-693000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image rm docker.io/kicbase/echo-server:functional-693000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-693000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 image save --daemon docker.io/kicbase/echo-server:functional-693000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-693000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.21s)
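Taken together, the image blocks above exercise a full save/remove/load round trip. A condensed sketch of the same flow, using a shortened tar path in place of the suite's /Users/jenkins/workspace/echo-server-save.tar; every subcommand appears verbatim in the logs above:

    # Save the tagged image out of the cluster runtime to a tarball ...
    out/minikube-darwin-arm64 -p functional-693000 image save \
      docker.io/kicbase/echo-server:functional-693000 /tmp/echo-server-save.tar
    # ... remove it from the runtime ...
    out/minikube-darwin-arm64 -p functional-693000 image rm \
      docker.io/kicbase/echo-server:functional-693000
    # ... load it back from the tarball and confirm it reappears.
    out/minikube-darwin-arm64 -p functional-693000 image load /tmp/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-693000 image ls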

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3101: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-693000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [13bf139b-17c7-4204-8e6f-5faa231a5559] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [13bf139b-17c7-4204-8e6f-5faa231a5559] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.0039565s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.10s)

TestFunctional/parallel/ServiceCmd/List (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.09s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 service list -o json
functional_test.go:1490: Took "86.415125ms" to run "out/minikube-darwin-arm64 -p functional-693000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:30614
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:30614
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-693000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.188.89 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
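The serial tunnel tests above amount to the following manual check; a sketch assuming the nginx-svc service from testdata/testsvc.yaml is already deployed (the 10.110.188.89 ingress IP is whatever this particular run assigned, not a fixed address):

    # Keep a tunnel running; it must stay in the foreground of its own shell.
    out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr &
    # Read the LoadBalancer ingress IP the tunnel published ...
    kubectl --context functional-693000 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    # ... then resolve the same service through the cluster DNS at 10.96.0.10
    # and through the macOS resolver, as the DNSResolution tests do.
    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
    dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.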

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-693000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "88.463709ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.765666ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "85.526166ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.68175ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (4.06s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1340042113/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721743620821254000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1340042113/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721743620821254000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1340042113/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721743620821254000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1340042113/001/test-1721743620821254000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.259083ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 23 14:07 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 23 14:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 23 14:07 test-1721743620821254000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh cat /mount-9p/test-1721743620821254000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-693000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1d2d4e65-c52e-46a3-b61b-0c16460d1668] Pending
helpers_test.go:344: "busybox-mount" [1d2d4e65-c52e-46a3-b61b-0c16460d1668] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1d2d4e65-c52e-46a3-b61b-0c16460d1668] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1d2d4e65-c52e-46a3-b61b-0c16460d1668] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.00414775s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-693000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1340042113/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.06s)

TestFunctional/parallel/MountCmd/specific-port (0.81s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3659186591/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.472125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3659186591/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "sudo umount -f /mount-9p": exit status 1 (61.633333ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-693000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3659186591/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.81s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.07s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2886696848/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2886696848/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2886696848/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1: exit status 1 (75.953459ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-693000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-693000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2886696848/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2886696848/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-693000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2886696848/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.07s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-693000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-693000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-693000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (193.88s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-023000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0723 07:07:30.447436    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:09:46.582825    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
E0723 07:10:14.286844    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-023000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m13.680407667s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (193.88s)

TestMultiControlPlane/serial/DeployApp (4.23s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-023000 -- rollout status deployment/busybox: (2.752812458s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-58mbb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-tfqsl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-wzhtr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-58mbb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-tfqsl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-wzhtr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-58mbb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-tfqsl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-wzhtr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.23s)

TestMultiControlPlane/serial/PingHostFromPods (0.76s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-58mbb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-58mbb -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-tfqsl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-tfqsl -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-wzhtr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-023000 -- exec busybox-fc5497c4f-wzhtr -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.76s)
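For reference, the pipeline each exec above runs inside the BusyBox pods: the test assumes the resolved address for host.minikube.internal lands on the fifth line of nslookup's output, so `awk 'NR==5'` plus `cut -d' ' -f3` extracts the host IP (192.168.105.1 here), which the pod then pings. A sketch against one of the pods:

    # Extract the host IP the way ha_test.go does (assumes BusyBox nslookup
    # prints the resolved address on its fifth output line) ...
    HOST_IP=$(out/minikube-darwin-arm64 kubectl -p ha-023000 -- \
      exec busybox-fc5497c4f-58mbb -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    # ... then verify the pod can reach the host on that address.
    out/minikube-darwin-arm64 kubectl -p ha-023000 -- \
      exec busybox-fc5497c4f-58mbb -- sh -c "ping -c 1 $HOST_IP"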

TestMultiControlPlane/serial/AddWorkerNode (56.22s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-023000 -v=7 --alsologtostderr
E0723 07:11:21.293443    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:11:21.299774    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:11:21.311237    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:11:21.333341    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:11:21.375438    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:11:21.457534    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:11:21.619315    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:11:21.941447    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:11:22.583612    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:11:23.864284    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
E0723 07:11:26.426107    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-023000 -v=7 --alsologtostderr: (55.99916825s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.22s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-023000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

TestMultiControlPlane/serial/CopyFile (4.37s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp testdata/cp-test.txt ha-023000:/home/docker/cp-test.txt
E0723 07:11:31.548381    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/functional-693000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1878017500/001/cp-test_ha-023000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000:/home/docker/cp-test.txt ha-023000-m02:/home/docker/cp-test_ha-023000_ha-023000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m02 "sudo cat /home/docker/cp-test_ha-023000_ha-023000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000:/home/docker/cp-test.txt ha-023000-m03:/home/docker/cp-test_ha-023000_ha-023000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m03 "sudo cat /home/docker/cp-test_ha-023000_ha-023000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000:/home/docker/cp-test.txt ha-023000-m04:/home/docker/cp-test_ha-023000_ha-023000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m04 "sudo cat /home/docker/cp-test_ha-023000_ha-023000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp testdata/cp-test.txt ha-023000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1878017500/001/cp-test_ha-023000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000-m02:/home/docker/cp-test.txt ha-023000:/home/docker/cp-test_ha-023000-m02_ha-023000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000 "sudo cat /home/docker/cp-test_ha-023000-m02_ha-023000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000-m02:/home/docker/cp-test.txt ha-023000-m03:/home/docker/cp-test_ha-023000-m02_ha-023000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m03 "sudo cat /home/docker/cp-test_ha-023000-m02_ha-023000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000-m02:/home/docker/cp-test.txt ha-023000-m04:/home/docker/cp-test_ha-023000-m02_ha-023000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m04 "sudo cat /home/docker/cp-test_ha-023000-m02_ha-023000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp testdata/cp-test.txt ha-023000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1878017500/001/cp-test_ha-023000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000-m03:/home/docker/cp-test.txt ha-023000:/home/docker/cp-test_ha-023000-m03_ha-023000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000 "sudo cat /home/docker/cp-test_ha-023000-m03_ha-023000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000-m03:/home/docker/cp-test.txt ha-023000-m02:/home/docker/cp-test_ha-023000-m03_ha-023000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m02 "sudo cat /home/docker/cp-test_ha-023000-m03_ha-023000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000-m03:/home/docker/cp-test.txt ha-023000-m04:/home/docker/cp-test_ha-023000-m03_ha-023000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m04 "sudo cat /home/docker/cp-test_ha-023000-m03_ha-023000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp testdata/cp-test.txt ha-023000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1878017500/001/cp-test_ha-023000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000-m04:/home/docker/cp-test.txt ha-023000:/home/docker/cp-test_ha-023000-m04_ha-023000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000 "sudo cat /home/docker/cp-test_ha-023000-m04_ha-023000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000-m04:/home/docker/cp-test.txt ha-023000-m02:/home/docker/cp-test_ha-023000-m04_ha-023000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m02 "sudo cat /home/docker/cp-test_ha-023000-m04_ha-023000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 cp ha-023000-m04:/home/docker/cp-test.txt ha-023000-m03:/home/docker/cp-test_ha-023000-m04_ha-023000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-023000 ssh -n ha-023000-m03 "sudo cat /home/docker/cp-test_ha-023000-m04_ha-023000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.37s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (76.87s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0723 07:21:09.630005    2065 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19319-1567/.minikube/profiles/addons-861000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m16.870808083s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (76.87s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.4s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-320000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-320000 --output=json --user=testUser: (3.398784708s)
--- PASS: TestJSONOutput/stop/Command (3.40s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-333000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-333000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.876ms)
-- stdout --
	{"specversion":"1.0","id":"a37e707c-f74c-4035-b8e4-74ec08df396e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-333000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b034dcc-7079-4aa7-8e88-0cea0c66ed22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19319"}}
	{"specversion":"1.0","id":"25ebbf7b-496f-496e-952f-9fadb9d1e08c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig"}}
	{"specversion":"1.0","id":"425253d3-3b7c-4e31-aabe-6910134ddaf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2ab41e7b-aebd-4287-a49d-31bea6420101","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"12ef4cbe-f5d8-459b-9b9c-bdc24317cd21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube"}}
	{"specversion":"1.0","id":"a92012da-f7be-49c3-a273-94c1e55c63a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cd523ffa-f58f-473a-9cb4-61f325ab7b8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-333000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-333000
--- PASS: TestErrorJSONOutput (0.20s)
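The events in the stdout block above are newline-delimited CloudEvents-style JSON, one object per line, which is what makes --output=json scriptable. As a minimal sketch of a consumer, the Go program below decodes each line into a struct whose fields are inferred from the events shown here (illustrative, not minikube's own type) and prints any io.k8s.sigs.minikube.error event, such as the DRV_UNSUPPORTED_OS failure above:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent mirrors the envelope fields visible in the log above.
	type cloudEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// e.g. out/minikube-darwin-arm64 start --output=json ... | this program
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}

Fed the stdout above, this would print: DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on darwin/arm64.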
TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.05s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-462000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-361000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-361000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.883166ms)
-- stdout --
	* [NoKubernetes-361000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19319-1567/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19319-1567/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
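The non-zero exit asserted above is minikube's MK_USAGE error (exit status 14): --no-kubernetes and --kubernetes-version are mutually exclusive. A self-contained Go sketch of that kind of flag-conflict guard follows; the flag names match the CLI invocation above, but the code is illustrative rather than minikube's actual validation logic:

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noKubernetes := flag.Bool("no-kubernetes", false, "start the host without Kubernetes")
		kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		// The guard the test exercises: reject the contradictory
		// combination up front with a usage error.
		if *noKubernetes && *kubernetesVersion != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14) // the exit status the test asserts
		}
	}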
TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-361000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-361000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.876292ms)
-- stdout --
	* The control-plane node NoKubernetes-361000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-361000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (0.1s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.10s)

TestNoKubernetes/serial/Stop (1.98s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-361000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-361000: (1.981079s)
--- PASS: TestNoKubernetes/serial/Stop (1.98s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-361000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-361000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.590416ms)
-- stdout --
	* The control-plane node NoKubernetes-361000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-361000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (3.46s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-665000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-665000 --alsologtostderr -v=3: (3.463604583s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-665000 -n old-k8s-version-665000: exit status 7 (57.241875ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-665000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
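The --format={{.Host}} argument used above is a Go text/template, so minikube status renders only the Host field of its status struct — hence the bare "Stopped" in the stdout block. A minimal sketch of the mechanism (the Status type here is a stand-in, not minikube's real one):

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the struct that minikube renders via --format;
	// only the field used by the test above is modelled.
	type Status struct {
		Host string
	}

	func main() {
		// The same template string the test passes as --format={{.Host}}.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"}) // prints: Stopped
	}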
TestStartStop/group/no-preload/serial/Stop (1.82s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-918000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-918000 --alsologtostderr -v=3: (1.814877292s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.82s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (57.805125ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-918000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.53s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-482000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-482000 --alsologtostderr -v=3: (3.530071083s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.53s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-482000 -n embed-certs-482000: exit status 7 (31.397625ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-482000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-374000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-374000 --alsologtostderr -v=3: (3.123577833s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-498000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.38s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-498000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-498000 --alsologtostderr -v=3: (3.37796075s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-374000 -n default-k8s-diff-port-374000: exit status 7 (51.372375ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-374000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-498000 -n newest-cni-498000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-498000 -n newest-cni-498000: exit status 7 (56.216334ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-498000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/282)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.25s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-703000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-703000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-703000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

>>> host: cri-dockerd version:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

>>> host: containerd daemon status:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

>>> host: containerd daemon config:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

>>> host: containerd config dump:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

>>> host: crio daemon status:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

>>> host: crio daemon config:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

>>> host: /etc/crio:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

>>> host: crio config:
* Profile "cilium-703000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703000"

----------------------- debugLogs end: cilium-703000 [took: 2.147437416s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-703000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-703000
--- SKIP: TestNetworkPlugins/group/cilium (2.25s)
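The two helpers_test.go lines above show the cleanup idiom: even a skipped test deletes its profile so later tests start from a clean slate. A compressed sketch of that pattern, assuming a Target() helper that returns the binary under test (hardcoded here for illustration; the real suite's helpers do more logging):

package integration

import (
	"os/exec"
	"testing"
)

// Target returns the binary under test; hardcoded here for illustration.
func Target() string { return "out/minikube-darwin-arm64" }

// CleanupProfile mirrors the helpers_test.go lines above: delete the
// profile even when the test that created (or skipped) it bailed early.
func CleanupProfile(t *testing.T, profile string) {
	t.Helper()
	t.Logf("Cleaning up %q profile ...", profile)
	if out, err := exec.Command(Target(), "delete", "-p", profile).CombinedOutput(); err != nil {
		t.Logf("failed to clean up profile %q: %v: %s", profile, err, out)
	}
}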

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-416000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-416000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
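The === PAUSE / === CONT lines come from t.Parallel(), and the skip itself is gated on the active driver, per the message from start_stop_delete_test.go:103. A minimal sketch consistent with that output; the environment-variable gate is a placeholder, since the real suite inspects its own driver test flag:

package integration

import (
	"os"
	"testing"
)

func TestDisableDriverMounts(t *testing.T) {
	t.Parallel() // emits the === PAUSE / === CONT lines seen above
	// Placeholder gate: the real check reads the suite's driver flag.
	if os.Getenv("MINIKUBE_DRIVER") != "virtualbox" {
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
	// ... exercise --disable-driver-mounts against a real cluster here ...
}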
