Test Report: QEMU_macOS 19345

418bbe9cf4ce8ef71c806703730b1f6a2265d8b5:2024-07-29:35554

Failed tests (97/282)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 20.85
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.92
55 TestCertOptions 10.07
56 TestCertExpiration 195.12
57 TestDockerFlags 10.12
58 TestForceSystemdFlag 10.02
59 TestForceSystemdEnv 10.84
104 TestFunctional/parallel/ServiceCmdConnect 38.91
176 TestMultiControlPlane/serial/StopSecondaryNode 214.12
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 126.99
178 TestMultiControlPlane/serial/RestartSecondaryNode 184.06
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.38
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.04
183 TestMultiControlPlane/serial/StopCluster 202.09
184 TestMultiControlPlane/serial/RestartCluster 5.25
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
186 TestMultiControlPlane/serial/AddSecondaryNode 0.07
190 TestImageBuild/serial/Setup 9.98
193 TestJSONOutput/start/Command 9.81
199 TestJSONOutput/pause/Command 0.08
205 TestJSONOutput/unpause/Command 0.04
222 TestMinikubeProfile 10.01
225 TestMountStart/serial/StartWithMountFirst 9.85
228 TestMultiNode/serial/FreshStart2Nodes 9.84
229 TestMultiNode/serial/DeployApp2Nodes 76.95
230 TestMultiNode/serial/PingHostFrom2Pods 0.09
231 TestMultiNode/serial/AddNode 0.07
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.07
234 TestMultiNode/serial/CopyFile 0.06
235 TestMultiNode/serial/StopNode 0.14
236 TestMultiNode/serial/StartAfterStop 42.65
237 TestMultiNode/serial/RestartKeepsNodes 8.8
238 TestMultiNode/serial/DeleteNode 0.1
239 TestMultiNode/serial/StopMultiNode 2.13
240 TestMultiNode/serial/RestartMultiNode 5.25
241 TestMultiNode/serial/ValidateNameConflict 20.11
245 TestPreload 10.17
247 TestScheduledStopUnix 9.98
248 TestSkaffold 13.48
251 TestRunningBinaryUpgrade 627.23
253 TestKubernetesUpgrade 17.32
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.76
267 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.33
269 TestStoppedBinaryUpgrade/Upgrade 570.35
271 TestPause/serial/Start 9.88
281 TestNoKubernetes/serial/StartWithK8s 9.9
282 TestNoKubernetes/serial/StartWithStopK8s 5.3
283 TestNoKubernetes/serial/Start 5.3
287 TestNoKubernetes/serial/StartNoArgs 5.31
289 TestNetworkPlugins/group/auto/Start 9.85
290 TestNetworkPlugins/group/kindnet/Start 9.98
291 TestNetworkPlugins/group/calico/Start 9.9
292 TestNetworkPlugins/group/custom-flannel/Start 9.74
293 TestNetworkPlugins/group/false/Start 9.88
294 TestNetworkPlugins/group/enable-default-cni/Start 9.99
296 TestNetworkPlugins/group/flannel/Start 9.86
297 TestNetworkPlugins/group/bridge/Start 9.79
298 TestNetworkPlugins/group/kubenet/Start 11.8
300 TestStartStop/group/old-k8s-version/serial/FirstStart 11.9
302 TestStartStop/group/no-preload/serial/FirstStart 9.95
303 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
307 TestStartStop/group/old-k8s-version/serial/SecondStart 6.23
308 TestStartStop/group/no-preload/serial/DeployApp 0.09
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
312 TestStartStop/group/no-preload/serial/SecondStart 5.75
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/old-k8s-version/serial/Pause 0.1
318 TestStartStop/group/embed-certs/serial/FirstStart 9.94
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
322 TestStartStop/group/no-preload/serial/Pause 0.1
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.05
325 TestStartStop/group/embed-certs/serial/DeployApp 0.09
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
332 TestStartStop/group/embed-certs/serial/SecondStart 5.26
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
335 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
336 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
337 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
338 TestStartStop/group/embed-certs/serial/Pause 0.1
340 TestStartStop/group/newest-cni/serial/FirstStart 9.91
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
349 TestStartStop/group/newest-cni/serial/SecondStart 5.25
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
353 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (20.85s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-221000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-221000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (20.849329875s)

-- stdout --
	{"specversion":"1.0","id":"dfe023c2-b010-4434-87c3-bd0140fe2183","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-221000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba99791d-d2eb-4ca1-a76c-4050422afa12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19345"}}
	{"specversion":"1.0","id":"50a296a5-5a0a-4acc-b4a3-4b9b783ecd1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig"}}
	{"specversion":"1.0","id":"dfc535e8-d1eb-41f9-a39e-26c924653187","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2fbdcc0e-f41b-4e85-9202-e9a43e74fcd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7342e901-3411-46a4-8bca-cf727a9ddead","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube"}}
	{"specversion":"1.0","id":"e676f059-8119-4203-9a07-cb159f23a5d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"1d14be24-e37d-4e8a-b24a-9cfa4161a919","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b4e229e-3987-48e8-b482-656be8c9030b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"064ccc31-835f-4849-a47a-ec318f47a8eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"10ee2cfe-5321-410b-96b8-fe733aa57366","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-221000\" primary control-plane node in \"download-only-221000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5b49aec-a184-4381-b889-5234b5b140bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a908f6e-2b79-4f78-802c-2014e63a08e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108d11a60 0x108d11a60 0x108d11a60 0x108d11a60 0x108d11a60 0x108d11a60 0x108d11a60] Decompressors:map[bz2:0x1400068c0f0 gz:0x1400068c0f8 tar:0x1400068c0a0 tar.bz2:0x1400068c0b0 tar.gz:0x1400068c0c0 tar.xz:0x1400068c0d0 tar.zst:0x1400068c0e0 tbz2:0x1400068c0b0 tgz:0x1400068c0c0 txz:0x1400068c0d0 tzst:0x1400068c0e0 xz:0x1400068c100 zip:0x1400068c110 zst:0x1400068c108] Getters:map[file:0x140002ba730 http:0x14000bc4280 https:0x14000bc42d0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"1e8130f7-8e88-4801-8efb-cc4435761e82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0729 09:55:03.449255    1650 out.go:291] Setting OutFile to fd 1 ...
	I0729 09:55:03.449403    1650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 09:55:03.449406    1650 out.go:304] Setting ErrFile to fd 2...
	I0729 09:55:03.449408    1650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 09:55:03.449519    1650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	W0729 09:55:03.449610    1650 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19345-1151/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19345-1151/.minikube/config/config.json: no such file or directory
	I0729 09:55:03.451015    1650 out.go:298] Setting JSON to true
	I0729 09:55:03.468323    1650 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1467,"bootTime":1722270636,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 09:55:03.468391    1650 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 09:55:03.474145    1650 out.go:97] [download-only-221000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 09:55:03.474328    1650 notify.go:220] Checking for updates...
	W0729 09:55:03.474384    1650 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 09:55:03.477088    1650 out.go:169] MINIKUBE_LOCATION=19345
	I0729 09:55:03.480195    1650 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 09:55:03.485130    1650 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 09:55:03.488138    1650 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 09:55:03.491172    1650 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	W0729 09:55:03.497128    1650 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 09:55:03.497365    1650 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 09:55:03.502116    1650 out.go:97] Using the qemu2 driver based on user configuration
	I0729 09:55:03.502136    1650 start.go:297] selected driver: qemu2
	I0729 09:55:03.502158    1650 start.go:901] validating driver "qemu2" against <nil>
	I0729 09:55:03.502220    1650 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 09:55:03.505037    1650 out.go:169] Automatically selected the socket_vmnet network
	I0729 09:55:03.510785    1650 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 09:55:03.510901    1650 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 09:55:03.510928    1650 cni.go:84] Creating CNI manager for ""
	I0729 09:55:03.510945    1650 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 09:55:03.511006    1650 start.go:340] cluster config:
	{Name:download-only-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 09:55:03.516414    1650 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 09:55:03.519955    1650 out.go:97] Downloading VM boot image ...
	I0729 09:55:03.519974    1650 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 09:55:11.043286    1650 out.go:97] Starting "download-only-221000" primary control-plane node in "download-only-221000" cluster
	I0729 09:55:11.043321    1650 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 09:55:11.102724    1650 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 09:55:11.102730    1650 cache.go:56] Caching tarball of preloaded images
	I0729 09:55:11.102895    1650 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 09:55:11.108007    1650 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 09:55:11.108014    1650 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 09:55:11.188371    1650 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 09:55:23.142267    1650 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 09:55:23.142422    1650 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 09:55:23.836605    1650 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 09:55:23.836795    1650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/download-only-221000/config.json ...
	I0729 09:55:23.836814    1650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/download-only-221000/config.json: {Name:mk7d7e1298c35a725f1a1a40593756d5303c6732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 09:55:23.837617    1650 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 09:55:23.837867    1650 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 09:55:24.226915    1650 out.go:169] 
	W0729 09:55:24.230929    1650 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108d11a60 0x108d11a60 0x108d11a60 0x108d11a60 0x108d11a60 0x108d11a60 0x108d11a60] Decompressors:map[bz2:0x1400068c0f0 gz:0x1400068c0f8 tar:0x1400068c0a0 tar.bz2:0x1400068c0b0 tar.gz:0x1400068c0c0 tar.xz:0x1400068c0d0 tar.zst:0x1400068c0e0 tbz2:0x1400068c0b0 tgz:0x1400068c0c0 txz:0x1400068c0d0 tzst:0x1400068c0e0 xz:0x1400068c100 zip:0x1400068c110 zst:0x1400068c108] Getters:map[file:0x140002ba730 http:0x14000bc4280 https:0x14000bc42d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 09:55:24.230952    1650 out_reason.go:110] 
	W0729 09:55:24.237914    1650 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 09:55:24.241895    1650 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-221000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (20.85s)
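Each line in the JSON output above is a CloudEvents envelope whose minikube-specific payload sits under "data". As a minimal decoding sketch (not part of the test suite), assuming only the envelope fields visible in the events above:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Only the envelope fields used below; the real events above carry more.
    type event struct {
    	Type string            `json:"type"`
    	Data map[string]string `json:"data"`
    }

    func main() {
    	// Abbreviated copy of the io.k8s.sigs.minikube.error event above.
    	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"40","name":"INET_CACHE_KUBECTL","message":"Failed to cache kubectl: ..."}}`
    	var ev event
    	if err := json.Unmarshal([]byte(line), &ev); err != nil {
    		fmt.Println("decode failed:", err)
    		return
    	}
    	fmt.Printf("%s (exit %s): %s\n", ev.Type, ev.Data["exitcode"], ev.Data["message"])
    }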

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
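This failure is downstream of the previous one: the kubectl binary never landed in the cache because the checksum download above returned 404 (v1.20.0 predates upstream darwin/arm64 kubectl builds, so the 404 is expected rather than transient). A minimal out-of-band probe of the same URL, as a diagnostic sketch using only Go's standard library:

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	// The checksum URL from the INET_CACHE_KUBECTL error above.
    	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
    	resp, err := http.Head(url)
    	if err != nil {
    		fmt.Println("request failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	// Given the failure above, this is expected to print 404.
    	fmt.Println(resp.StatusCode)
    }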

TestOffline (9.92s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-275000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-275000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.76809775s)

-- stdout --
	* [offline-docker-275000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-275000" primary control-plane node in "offline-docker-275000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-275000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:32:59.835467    4171 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:32:59.835637    4171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:32:59.835643    4171 out.go:304] Setting ErrFile to fd 2...
	I0729 10:32:59.835645    4171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:32:59.835774    4171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:32:59.837028    4171 out.go:298] Setting JSON to false
	I0729 10:32:59.854499    4171 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3743,"bootTime":1722270636,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:32:59.854570    4171 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:32:59.859965    4171 out.go:177] * [offline-docker-275000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:32:59.868225    4171 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:32:59.868307    4171 notify.go:220] Checking for updates...
	I0729 10:32:59.875181    4171 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:32:59.878209    4171 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:32:59.881177    4171 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:32:59.884182    4171 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:32:59.887213    4171 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:32:59.890537    4171 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:32:59.890595    4171 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:32:59.894141    4171 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:32:59.901107    4171 start.go:297] selected driver: qemu2
	I0729 10:32:59.901118    4171 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:32:59.901125    4171 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:32:59.902905    4171 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:32:59.906186    4171 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:32:59.909235    4171 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:32:59.909269    4171 cni.go:84] Creating CNI manager for ""
	I0729 10:32:59.909276    4171 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:32:59.909280    4171 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:32:59.909325    4171 start.go:340] cluster config:
	{Name:offline-docker-275000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:32:59.913012    4171 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:32:59.920167    4171 out.go:177] * Starting "offline-docker-275000" primary control-plane node in "offline-docker-275000" cluster
	I0729 10:32:59.924160    4171 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:32:59.924187    4171 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:32:59.924198    4171 cache.go:56] Caching tarball of preloaded images
	I0729 10:32:59.924272    4171 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:32:59.924277    4171 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:32:59.924345    4171 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/offline-docker-275000/config.json ...
	I0729 10:32:59.924355    4171 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/offline-docker-275000/config.json: {Name:mkf0c656a1bb96681adec7b769814ba9ee5795ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:32:59.924633    4171 start.go:360] acquireMachinesLock for offline-docker-275000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:32:59.924666    4171 start.go:364] duration metric: took 25.416µs to acquireMachinesLock for "offline-docker-275000"
	I0729 10:32:59.924678    4171 start.go:93] Provisioning new machine with config: &{Name:offline-docker-275000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:32:59.924710    4171 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:32:59.933180    4171 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:32:59.948934    4171 start.go:159] libmachine.API.Create for "offline-docker-275000" (driver="qemu2")
	I0729 10:32:59.948967    4171 client.go:168] LocalClient.Create starting
	I0729 10:32:59.949039    4171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:32:59.949073    4171 main.go:141] libmachine: Decoding PEM data...
	I0729 10:32:59.949082    4171 main.go:141] libmachine: Parsing certificate...
	I0729 10:32:59.949130    4171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:32:59.949154    4171 main.go:141] libmachine: Decoding PEM data...
	I0729 10:32:59.949161    4171 main.go:141] libmachine: Parsing certificate...
	I0729 10:32:59.949536    4171 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:33:00.101706    4171 main.go:141] libmachine: Creating SSH key...
	I0729 10:33:00.188793    4171 main.go:141] libmachine: Creating Disk image...
	I0729 10:33:00.188802    4171 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:33:00.188970    4171 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/disk.qcow2
	I0729 10:33:00.198144    4171 main.go:141] libmachine: STDOUT: 
	I0729 10:33:00.198173    4171 main.go:141] libmachine: STDERR: 
	I0729 10:33:00.198237    4171 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/disk.qcow2 +20000M
	I0729 10:33:00.206908    4171 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:33:00.206928    4171 main.go:141] libmachine: STDERR: 
	I0729 10:33:00.206953    4171 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/disk.qcow2
	I0729 10:33:00.206958    4171 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:33:00.206972    4171 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:33:00.206997    4171 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:be:85:fa:86:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/disk.qcow2
	I0729 10:33:00.208708    4171 main.go:141] libmachine: STDOUT: 
	I0729 10:33:00.208722    4171 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:33:00.208757    4171 client.go:171] duration metric: took 259.797208ms to LocalClient.Create
	I0729 10:33:02.210822    4171 start.go:128] duration metric: took 2.286206625s to createHost
	I0729 10:33:02.210847    4171 start.go:83] releasing machines lock for "offline-docker-275000", held for 2.286285458s
	W0729 10:33:02.210861    4171 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:33:02.215201    4171 out.go:177] * Deleting "offline-docker-275000" in qemu2 ...
	W0729 10:33:02.236071    4171 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:33:02.236082    4171 start.go:729] Will try again in 5 seconds ...
	I0729 10:33:07.237963    4171 start.go:360] acquireMachinesLock for offline-docker-275000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:33:07.238100    4171 start.go:364] duration metric: took 107.666µs to acquireMachinesLock for "offline-docker-275000"
	I0729 10:33:07.238133    4171 start.go:93] Provisioning new machine with config: &{Name:offline-docker-275000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:33:07.238213    4171 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:33:07.248894    4171 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:33:07.272226    4171 start.go:159] libmachine.API.Create for "offline-docker-275000" (driver="qemu2")
	I0729 10:33:07.272255    4171 client.go:168] LocalClient.Create starting
	I0729 10:33:07.272328    4171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:33:07.272372    4171 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:07.272401    4171 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:07.272443    4171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:33:07.272474    4171 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:07.272483    4171 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:07.272824    4171 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:33:07.428623    4171 main.go:141] libmachine: Creating SSH key...
	I0729 10:33:07.507279    4171 main.go:141] libmachine: Creating Disk image...
	I0729 10:33:07.507287    4171 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:33:07.507464    4171 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/disk.qcow2
	I0729 10:33:07.516568    4171 main.go:141] libmachine: STDOUT: 
	I0729 10:33:07.516584    4171 main.go:141] libmachine: STDERR: 
	I0729 10:33:07.516633    4171 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/disk.qcow2 +20000M
	I0729 10:33:07.524334    4171 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:33:07.524353    4171 main.go:141] libmachine: STDERR: 
	I0729 10:33:07.524363    4171 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/disk.qcow2
	I0729 10:33:07.524368    4171 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:33:07.524376    4171 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:33:07.524414    4171 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:96:25:cb:75:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/offline-docker-275000/disk.qcow2
	I0729 10:33:07.525999    4171 main.go:141] libmachine: STDOUT: 
	I0729 10:33:07.526018    4171 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:33:07.526030    4171 client.go:171] duration metric: took 253.783875ms to LocalClient.Create
	I0729 10:33:09.528126    4171 start.go:128] duration metric: took 2.289996958s to createHost
	I0729 10:33:09.528188    4171 start.go:83] releasing machines lock for "offline-docker-275000", held for 2.290184583s
	W0729 10:33:09.528605    4171 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-275000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-275000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:33:09.543186    4171 out.go:177] 
	W0729 10:33:09.547331    4171 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:33:09.547353    4171 out.go:239] * 
	* 
	W0729 10:33:09.550281    4171 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:33:09.560094    4171 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-275000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-29 10:33:09.575764 -0700 PDT m=+2286.402261585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-275000 -n offline-docker-275000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-275000 -n offline-docker-275000: exit status 7 (68.393542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-275000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-275000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-275000
--- FAIL: TestOffline (9.92s)
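Every qemu2 start in this run fails the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet, which points at the socket_vmnet daemon not running on the agent rather than anything test-specific. A minimal sketch of the same connectivity check, assuming only the socket path quoted in the logs (it may need to run as root, since the socket lives under /var/run):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The unix socket socket_vmnet_client tries to reach in the logs above.
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// Matches this run's failure mode: "connection refused" (or
    		// "no such file or directory" if the daemon never created the socket).
    		fmt.Println("socket_vmnet unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, restarting the daemon on the host (for the Homebrew install minikube's qemu2 docs describe, something like `sudo brew services restart socket_vmnet`) should clear this whole family of failures.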

TestCertOptions (10.07s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-456000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-456000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.809202041s)

-- stdout --
	* [cert-options-456000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-456000" primary control-plane node in "cert-options-456000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-456000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-456000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-456000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-456000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-456000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (81.507417ms)

-- stdout --
	* The control-plane node cert-options-456000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-456000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-456000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-456000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-456000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-456000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.918958ms)

-- stdout --
	* The control-plane node cert-options-456000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-456000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-456000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-456000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-456000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-29 10:33:40.649328 -0700 PDT m=+2317.477303001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-456000 -n cert-options-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-456000 -n cert-options-456000: exit status 7 (31.081583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-456000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-456000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-456000
--- FAIL: TestCertOptions (10.07s)

TestCertExpiration (195.12s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-315000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-315000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.74741475s)

-- stdout --
	* [cert-expiration-315000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-315000" primary control-plane node in "cert-expiration-315000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-315000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-315000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-315000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.221484042s)

-- stdout --
	* [cert-expiration-315000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-315000" primary control-plane node in "cert-expiration-315000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-315000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-315000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-315000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-315000" primary control-plane node in "cert-expiration-315000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-315000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-29 10:36:40.64498 -0700 PDT m=+2497.481515585
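
Note: the 195s runtime is mostly the test waiting out the 3-minute --cert-expiration window between the two starts; the second start is then expected to warn that the cluster certificates have expired. Since no VM ever booted, there were no certificates to expire. A sketch of the underlying condition (illustrative; the cert path is hypothetical):

	// expiry.go - report whether a PEM certificate's NotAfter lies in the
	// past, the state the restart is expected to warn about.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("apiserver.crt") // hypothetical copy of a cluster cert
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("NotAfter=%s expired=%v\n", cert.NotAfter, time.Now().After(cert.NotAfter))
	}
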
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-315000 -n cert-expiration-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-315000 -n cert-expiration-315000: exit status 7 (64.510334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-315000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-315000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-315000
--- FAIL: TestCertExpiration (195.12s)

TestDockerFlags (10.12s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-083000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-083000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.891265792s)

-- stdout --
	* [docker-flags-083000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-083000" primary control-plane node in "docker-flags-083000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-083000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:33:20.590225    4363 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:33:20.590359    4363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:33:20.590363    4363 out.go:304] Setting ErrFile to fd 2...
	I0729 10:33:20.590365    4363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:33:20.590497    4363 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:33:20.591535    4363 out.go:298] Setting JSON to false
	I0729 10:33:20.607478    4363 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3764,"bootTime":1722270636,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:33:20.607544    4363 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:33:20.611508    4363 out.go:177] * [docker-flags-083000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:33:20.620225    4363 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:33:20.620272    4363 notify.go:220] Checking for updates...
	I0729 10:33:20.626229    4363 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:33:20.629217    4363 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:33:20.632141    4363 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:33:20.635185    4363 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:33:20.638232    4363 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:33:20.639979    4363 config.go:182] Loaded profile config "force-systemd-flag-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:33:20.640047    4363 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:33:20.640089    4363 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:33:20.644131    4363 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:33:20.651022    4363 start.go:297] selected driver: qemu2
	I0729 10:33:20.651031    4363 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:33:20.651040    4363 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:33:20.653373    4363 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:33:20.656202    4363 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:33:20.659239    4363 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0729 10:33:20.659255    4363 cni.go:84] Creating CNI manager for ""
	I0729 10:33:20.659264    4363 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:33:20.659269    4363 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:33:20.659302    4363 start.go:340] cluster config:
	{Name:docker-flags-083000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-083000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:33:20.663022    4363 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:33:20.670205    4363 out.go:177] * Starting "docker-flags-083000" primary control-plane node in "docker-flags-083000" cluster
	I0729 10:33:20.674152    4363 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:33:20.674166    4363 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:33:20.674173    4363 cache.go:56] Caching tarball of preloaded images
	I0729 10:33:20.674226    4363 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:33:20.674231    4363 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:33:20.674291    4363 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/docker-flags-083000/config.json ...
	I0729 10:33:20.674305    4363 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/docker-flags-083000/config.json: {Name:mk727b2322bae7eb0f3ef23257e767d762ac012a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:33:20.674514    4363 start.go:360] acquireMachinesLock for docker-flags-083000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:33:20.674548    4363 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "docker-flags-083000"
	I0729 10:33:20.674561    4363 start.go:93] Provisioning new machine with config: &{Name:docker-flags-083000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-083000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:33:20.674594    4363 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:33:20.679191    4363 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:33:20.696953    4363 start.go:159] libmachine.API.Create for "docker-flags-083000" (driver="qemu2")
	I0729 10:33:20.696979    4363 client.go:168] LocalClient.Create starting
	I0729 10:33:20.697043    4363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:33:20.697071    4363 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:20.697081    4363 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:20.697119    4363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:33:20.697141    4363 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:20.697149    4363 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:20.697472    4363 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:33:20.851466    4363 main.go:141] libmachine: Creating SSH key...
	I0729 10:33:20.966428    4363 main.go:141] libmachine: Creating Disk image...
	I0729 10:33:20.966441    4363 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:33:20.966615    4363 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/disk.qcow2
	I0729 10:33:20.975975    4363 main.go:141] libmachine: STDOUT: 
	I0729 10:33:20.975994    4363 main.go:141] libmachine: STDERR: 
	I0729 10:33:20.976052    4363 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/disk.qcow2 +20000M
	I0729 10:33:20.983842    4363 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:33:20.983855    4363 main.go:141] libmachine: STDERR: 
	I0729 10:33:20.983875    4363 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/disk.qcow2
	I0729 10:33:20.983879    4363 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:33:20.983894    4363 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:33:20.983922    4363 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:f1:d0:14:3c:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/disk.qcow2
	I0729 10:33:20.985516    4363 main.go:141] libmachine: STDOUT: 
	I0729 10:33:20.985535    4363 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:33:20.985552    4363 client.go:171] duration metric: took 288.582292ms to LocalClient.Create
	I0729 10:33:22.987674    4363 start.go:128] duration metric: took 2.313169417s to createHost
	I0729 10:33:22.987743    4363 start.go:83] releasing machines lock for "docker-flags-083000", held for 2.313295542s
	W0729 10:33:22.987840    4363 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:33:23.004962    4363 out.go:177] * Deleting "docker-flags-083000" in qemu2 ...
	W0729 10:33:23.032475    4363 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:33:23.032499    4363 start.go:729] Will try again in 5 seconds ...
	I0729 10:33:28.034433    4363 start.go:360] acquireMachinesLock for docker-flags-083000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:33:28.055321    4363 start.go:364] duration metric: took 20.756083ms to acquireMachinesLock for "docker-flags-083000"
	I0729 10:33:28.055463    4363 start.go:93] Provisioning new machine with config: &{Name:docker-flags-083000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-083000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:33:28.055757    4363 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:33:28.066442    4363 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:33:28.117323    4363 start.go:159] libmachine.API.Create for "docker-flags-083000" (driver="qemu2")
	I0729 10:33:28.117376    4363 client.go:168] LocalClient.Create starting
	I0729 10:33:28.117504    4363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:33:28.117567    4363 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:28.117587    4363 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:28.117663    4363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:33:28.117708    4363 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:28.117718    4363 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:28.118334    4363 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:33:28.282684    4363 main.go:141] libmachine: Creating SSH key...
	I0729 10:33:28.380159    4363 main.go:141] libmachine: Creating Disk image...
	I0729 10:33:28.380164    4363 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:33:28.380341    4363 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/disk.qcow2
	I0729 10:33:28.389730    4363 main.go:141] libmachine: STDOUT: 
	I0729 10:33:28.389750    4363 main.go:141] libmachine: STDERR: 
	I0729 10:33:28.389800    4363 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/disk.qcow2 +20000M
	I0729 10:33:28.397523    4363 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:33:28.397547    4363 main.go:141] libmachine: STDERR: 
	I0729 10:33:28.397557    4363 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/disk.qcow2
	I0729 10:33:28.397563    4363 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:33:28.397570    4363 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:33:28.397601    4363 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:fe:72:1f:e8:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/docker-flags-083000/disk.qcow2
	I0729 10:33:28.399227    4363 main.go:141] libmachine: STDOUT: 
	I0729 10:33:28.399244    4363 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:33:28.399257    4363 client.go:171] duration metric: took 281.88925ms to LocalClient.Create
	I0729 10:33:30.401377    4363 start.go:128] duration metric: took 2.345692292s to createHost
	I0729 10:33:30.401447    4363 start.go:83] releasing machines lock for "docker-flags-083000", held for 2.346214542s
	W0729 10:33:30.401901    4363 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-083000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-083000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:33:30.420687    4363 out.go:177] 
	W0729 10:33:30.428671    4363 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:33:30.428710    4363 out.go:239] * 
	* 
	W0729 10:33:30.431524    4363 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:33:30.439467    4363 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-083000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-083000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-083000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (77.813958ms)

-- stdout --
	* The control-plane node docker-flags-083000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-083000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-083000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-083000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-083000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-083000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-083000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-083000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-083000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.787666ms)

-- stdout --
	* The control-plane node docker-flags-083000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-083000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-083000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-083000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-083000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-083000\"\n"
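
Note: on a healthy node, `systemctl show docker --property=Environment` prints a single Environment= line and --property=ExecStart prints the daemon command line; the assertions above grep those for the --docker-env pairs and --docker-opt flags from the start command. A toy version of that string check (the sample lines are hypothetical, since the VM never booted):

	// dockerflags.go - the substring assertions docker_test.go makes
	// against systemctl output, applied to hypothetical sample lines.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		env := "Environment=FOO=BAR BAZ=BAT" // hypothetical Environment output
		execStart := "ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true }" // hypothetical
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			fmt.Printf("%s in Environment: %v\n", want, strings.Contains(env, want))
		}
		fmt.Println("--debug in ExecStart:", strings.Contains(execStart, "--debug"))
	}
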
panic.go:626: *** TestDockerFlags FAILED at 2024-07-29 10:33:30.580581 -0700 PDT m=+2307.408077293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-083000 -n docker-flags-083000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-083000 -n docker-flags-083000: exit status 7 (29.226166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-083000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-083000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-083000
--- FAIL: TestDockerFlags (10.12s)

TestForceSystemdFlag (10.02s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-813000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-813000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.828885667s)

-- stdout --
	* [force-systemd-flag-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-813000" primary control-plane node in "force-systemd-flag-813000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-813000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:33:15.685611    4340 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:33:15.685730    4340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:33:15.685734    4340 out.go:304] Setting ErrFile to fd 2...
	I0729 10:33:15.685737    4340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:33:15.685872    4340 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:33:15.686958    4340 out.go:298] Setting JSON to false
	I0729 10:33:15.702806    4340 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3759,"bootTime":1722270636,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:33:15.702865    4340 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:33:15.708036    4340 out.go:177] * [force-systemd-flag-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:33:15.714884    4340 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:33:15.714998    4340 notify.go:220] Checking for updates...
	I0729 10:33:15.722904    4340 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:33:15.725892    4340 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:33:15.728898    4340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:33:15.731892    4340 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:33:15.734909    4340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:33:15.738301    4340 config.go:182] Loaded profile config "force-systemd-env-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:33:15.738372    4340 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:33:15.738424    4340 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:33:15.742871    4340 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:33:15.749784    4340 start.go:297] selected driver: qemu2
	I0729 10:33:15.749789    4340 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:33:15.749796    4340 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:33:15.752153    4340 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:33:15.754827    4340 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:33:15.757959    4340 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:33:15.757976    4340 cni.go:84] Creating CNI manager for ""
	I0729 10:33:15.757984    4340 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:33:15.757993    4340 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:33:15.758025    4340 start.go:340] cluster config:
	{Name:force-systemd-flag-813000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:33:15.761681    4340 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:33:15.768847    4340 out.go:177] * Starting "force-systemd-flag-813000" primary control-plane node in "force-systemd-flag-813000" cluster
	I0729 10:33:15.772723    4340 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:33:15.772737    4340 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:33:15.772748    4340 cache.go:56] Caching tarball of preloaded images
	I0729 10:33:15.772802    4340 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:33:15.772808    4340 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:33:15.772888    4340 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/force-systemd-flag-813000/config.json ...
	I0729 10:33:15.772901    4340 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/force-systemd-flag-813000/config.json: {Name:mk59a2ea4428e31d859b7f3c27023aeb7fdea5da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:33:15.773262    4340 start.go:360] acquireMachinesLock for force-systemd-flag-813000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:33:15.773298    4340 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "force-systemd-flag-813000"
	I0729 10:33:15.773311    4340 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:33:15.773339    4340 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:33:15.781860    4340 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:33:15.799770    4340 start.go:159] libmachine.API.Create for "force-systemd-flag-813000" (driver="qemu2")
	I0729 10:33:15.799806    4340 client.go:168] LocalClient.Create starting
	I0729 10:33:15.799882    4340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:33:15.799913    4340 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:15.799921    4340 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:15.799964    4340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:33:15.799988    4340 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:15.799997    4340 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:15.800410    4340 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:33:15.957005    4340 main.go:141] libmachine: Creating SSH key...
	I0729 10:33:16.013100    4340 main.go:141] libmachine: Creating Disk image...
	I0729 10:33:16.013105    4340 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:33:16.013280    4340 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/disk.qcow2
	I0729 10:33:16.022291    4340 main.go:141] libmachine: STDOUT: 
	I0729 10:33:16.022304    4340 main.go:141] libmachine: STDERR: 
	I0729 10:33:16.022347    4340 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/disk.qcow2 +20000M
	I0729 10:33:16.030053    4340 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:33:16.030075    4340 main.go:141] libmachine: STDERR: 
	I0729 10:33:16.030090    4340 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/disk.qcow2
	I0729 10:33:16.030094    4340 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:33:16.030104    4340 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:33:16.030130    4340 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:f0:7a:ca:ae:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/disk.qcow2
	I0729 10:33:16.031734    4340 main.go:141] libmachine: STDOUT: 
	I0729 10:33:16.031748    4340 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:33:16.031766    4340 client.go:171] duration metric: took 231.966791ms to LocalClient.Create
	I0729 10:33:18.033847    4340 start.go:128] duration metric: took 2.260594167s to createHost
	I0729 10:33:18.033925    4340 start.go:83] releasing machines lock for "force-systemd-flag-813000", held for 2.260723667s
	W0729 10:33:18.033970    4340 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:33:18.062120    4340 out.go:177] * Deleting "force-systemd-flag-813000" in qemu2 ...
	W0729 10:33:18.085257    4340 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:33:18.085278    4340 start.go:729] Will try again in 5 seconds ...
	I0729 10:33:23.087180    4340 start.go:360] acquireMachinesLock for force-systemd-flag-813000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:33:23.087562    4340 start.go:364] duration metric: took 301.084µs to acquireMachinesLock for "force-systemd-flag-813000"
	I0729 10:33:23.087646    4340 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:33:23.087905    4340 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:33:23.096964    4340 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:33:23.147531    4340 start.go:159] libmachine.API.Create for "force-systemd-flag-813000" (driver="qemu2")
	I0729 10:33:23.147583    4340 client.go:168] LocalClient.Create starting
	I0729 10:33:23.147694    4340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:33:23.147757    4340 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:23.147771    4340 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:23.147827    4340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:33:23.147874    4340 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:23.147889    4340 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:23.148907    4340 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:33:23.322178    4340 main.go:141] libmachine: Creating SSH key...
	I0729 10:33:23.423807    4340 main.go:141] libmachine: Creating Disk image...
	I0729 10:33:23.423812    4340 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:33:23.423980    4340 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/disk.qcow2
	I0729 10:33:23.433257    4340 main.go:141] libmachine: STDOUT: 
	I0729 10:33:23.433270    4340 main.go:141] libmachine: STDERR: 
	I0729 10:33:23.433321    4340 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/disk.qcow2 +20000M
	I0729 10:33:23.441103    4340 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:33:23.441116    4340 main.go:141] libmachine: STDERR: 
	I0729 10:33:23.441126    4340 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/disk.qcow2
	I0729 10:33:23.441130    4340 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:33:23.441140    4340 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:33:23.441163    4340 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:c4:ff:98:2a:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-flag-813000/disk.qcow2
	I0729 10:33:23.442734    4340 main.go:141] libmachine: STDOUT: 
	I0729 10:33:23.442747    4340 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:33:23.442758    4340 client.go:171] duration metric: took 295.182875ms to LocalClient.Create
	I0729 10:33:25.444834    4340 start.go:128] duration metric: took 2.357009833s to createHost
	I0729 10:33:25.444943    4340 start.go:83] releasing machines lock for "force-systemd-flag-813000", held for 2.3574675s
	W0729 10:33:25.445271    4340 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-813000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-813000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:33:25.455861    4340 out.go:177] 
	W0729 10:33:25.462989    4340 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:33:25.463019    4340 out.go:239] * 
	* 
	W0729 10:33:25.465762    4340 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:33:25.474851    4340 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-813000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-813000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-813000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.042292ms)

-- stdout --
	* The control-plane node force-systemd-flag-813000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-813000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-813000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-29 10:33:25.570777 -0700 PDT m=+2302.398034876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-813000 -n force-systemd-flag-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-813000 -n force-systemd-flag-813000: exit status 7 (34.173875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-813000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-813000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-813000
--- FAIL: TestForceSystemdFlag (10.02s)
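Both createHost attempts above fail at the same step: minikube launches qemu-system-aarch64 through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so no VM is ever booted and the systemd cgroup-driver check the test exists for is never reached. As a minimal sketch of how to verify the daemon independently of minikube (hypothetical file name, standard library only; the socket path is the SocketVMnetPath value logged above):

    // probesocket.go: dials the unix socket that socket_vmnet_client uses.
    // When the socket_vmnet daemon is down, this fails with the same
    // "connection refused" seen in the stderr log above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails the same way, the socket_vmnet daemon on the CI host is not running (or not listening at that path), and every qemu2-driver test that needs the socket_vmnet network will fail identically.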

TestForceSystemdEnv (10.84s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-193000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-193000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.651533542s)

-- stdout --
	* [force-systemd-env-193000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-193000" primary control-plane node in "force-systemd-env-193000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-193000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:33:09.751265    4308 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:33:09.751391    4308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:33:09.751395    4308 out.go:304] Setting ErrFile to fd 2...
	I0729 10:33:09.751397    4308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:33:09.751537    4308 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:33:09.752562    4308 out.go:298] Setting JSON to false
	I0729 10:33:09.768485    4308 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3753,"bootTime":1722270636,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:33:09.768559    4308 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:33:09.775366    4308 out.go:177] * [force-systemd-env-193000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:33:09.784341    4308 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:33:09.784392    4308 notify.go:220] Checking for updates...
	I0729 10:33:09.791237    4308 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:33:09.794440    4308 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:33:09.797202    4308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:33:09.800249    4308 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:33:09.803283    4308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0729 10:33:09.806590    4308 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:33:09.806640    4308 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:33:09.811210    4308 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:33:09.817298    4308 start.go:297] selected driver: qemu2
	I0729 10:33:09.817306    4308 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:33:09.817313    4308 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:33:09.819567    4308 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:33:09.822253    4308 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:33:09.825344    4308 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:33:09.825376    4308 cni.go:84] Creating CNI manager for ""
	I0729 10:33:09.825384    4308 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:33:09.825388    4308 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:33:09.825426    4308 start.go:340] cluster config:
	{Name:force-systemd-env-193000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:33:09.829163    4308 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:33:09.836195    4308 out.go:177] * Starting "force-systemd-env-193000" primary control-plane node in "force-systemd-env-193000" cluster
	I0729 10:33:09.840294    4308 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:33:09.840316    4308 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:33:09.840329    4308 cache.go:56] Caching tarball of preloaded images
	I0729 10:33:09.840386    4308 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:33:09.840392    4308 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:33:09.840441    4308 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/force-systemd-env-193000/config.json ...
	I0729 10:33:09.840452    4308 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/force-systemd-env-193000/config.json: {Name:mkf136ce8fed88d742ffad288c259e534b4a844c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:33:09.840705    4308 start.go:360] acquireMachinesLock for force-systemd-env-193000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:33:09.840751    4308 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "force-systemd-env-193000"
	I0729 10:33:09.840763    4308 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:33:09.840808    4308 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:33:09.849251    4308 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:33:09.866015    4308 start.go:159] libmachine.API.Create for "force-systemd-env-193000" (driver="qemu2")
	I0729 10:33:09.866042    4308 client.go:168] LocalClient.Create starting
	I0729 10:33:09.866105    4308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:33:09.866136    4308 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:09.866145    4308 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:09.866184    4308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:33:09.866210    4308 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:09.866219    4308 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:09.866572    4308 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:33:10.022295    4308 main.go:141] libmachine: Creating SSH key...
	I0729 10:33:10.100529    4308 main.go:141] libmachine: Creating Disk image...
	I0729 10:33:10.100535    4308 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:33:10.100705    4308 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/disk.qcow2
	I0729 10:33:10.109976    4308 main.go:141] libmachine: STDOUT: 
	I0729 10:33:10.109991    4308 main.go:141] libmachine: STDERR: 
	I0729 10:33:10.110044    4308 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/disk.qcow2 +20000M
	I0729 10:33:10.118199    4308 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:33:10.118213    4308 main.go:141] libmachine: STDERR: 
	I0729 10:33:10.118235    4308 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/disk.qcow2
	I0729 10:33:10.118243    4308 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:33:10.118261    4308 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:33:10.118286    4308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:55:0f:f1:43:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/disk.qcow2
	I0729 10:33:10.119936    4308 main.go:141] libmachine: STDOUT: 
	I0729 10:33:10.119950    4308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:33:10.119975    4308 client.go:171] duration metric: took 253.940833ms to LocalClient.Create
	I0729 10:33:12.121948    4308 start.go:128] duration metric: took 2.281242459s to createHost
	I0729 10:33:12.121976    4308 start.go:83] releasing machines lock for "force-systemd-env-193000", held for 2.281328959s
	W0729 10:33:12.121988    4308 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:33:12.131543    4308 out.go:177] * Deleting "force-systemd-env-193000" in qemu2 ...
	W0729 10:33:12.142906    4308 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:33:12.142920    4308 start.go:729] Will try again in 5 seconds ...
	I0729 10:33:17.143453    4308 start.go:360] acquireMachinesLock for force-systemd-env-193000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:33:18.034075    4308 start.go:364] duration metric: took 890.533459ms to acquireMachinesLock for "force-systemd-env-193000"
	I0729 10:33:18.034238    4308 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:33:18.034489    4308 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:33:18.050124    4308 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:33:18.097767    4308 start.go:159] libmachine.API.Create for "force-systemd-env-193000" (driver="qemu2")
	I0729 10:33:18.097837    4308 client.go:168] LocalClient.Create starting
	I0729 10:33:18.097980    4308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:33:18.098062    4308 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:18.098077    4308 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:18.098143    4308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:33:18.098192    4308 main.go:141] libmachine: Decoding PEM data...
	I0729 10:33:18.098205    4308 main.go:141] libmachine: Parsing certificate...
	I0729 10:33:18.098743    4308 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:33:18.262292    4308 main.go:141] libmachine: Creating SSH key...
	I0729 10:33:18.310944    4308 main.go:141] libmachine: Creating Disk image...
	I0729 10:33:18.310948    4308 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:33:18.311116    4308 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/disk.qcow2
	I0729 10:33:18.320666    4308 main.go:141] libmachine: STDOUT: 
	I0729 10:33:18.320680    4308 main.go:141] libmachine: STDERR: 
	I0729 10:33:18.320728    4308 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/disk.qcow2 +20000M
	I0729 10:33:18.328608    4308 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:33:18.328627    4308 main.go:141] libmachine: STDERR: 
	I0729 10:33:18.328638    4308 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/disk.qcow2
	I0729 10:33:18.328642    4308 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:33:18.328652    4308 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:33:18.328678    4308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:f1:9b:89:0c:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/force-systemd-env-193000/disk.qcow2
	I0729 10:33:18.330304    4308 main.go:141] libmachine: STDOUT: 
	I0729 10:33:18.330329    4308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:33:18.330341    4308 client.go:171] duration metric: took 232.509083ms to LocalClient.Create
	I0729 10:33:20.332372    4308 start.go:128] duration metric: took 2.297941208s to createHost
	I0729 10:33:20.332449    4308 start.go:83] releasing machines lock for "force-systemd-env-193000", held for 2.298447458s
	W0729 10:33:20.332915    4308 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-193000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-193000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:33:20.338789    4308 out.go:177] 
	W0729 10:33:20.346657    4308 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:33:20.346693    4308 out.go:239] * 
	* 
	W0729 10:33:20.349417    4308 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:33:20.358594    4308 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-193000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-193000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-193000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.645917ms)

-- stdout --
	* The control-plane node force-systemd-env-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-193000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-193000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-29 10:33:20.451225 -0700 PDT m=+2297.278239668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-193000 -n force-systemd-env-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-193000 -n force-systemd-env-193000: exit status 7 (34.292541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-193000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-193000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-193000
--- FAIL: TestForceSystemdEnv (10.84s)
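TestForceSystemdEnv dies at the identical step as TestForceSystemdFlag above: both VM creation attempts hit the socket_vmnet "connection refused" before MINIKUBE_FORCE_SYSTEMD has any observable effect, so the socket probe sketched after TestForceSystemdFlag applies here unchanged.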

TestFunctional/parallel/ServiceCmdConnect (38.91s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-398000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-398000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-z4svv" [d3c5ffef-ed2a-478e-af88-a2dea7d41f33] Pending
helpers_test.go:344: "hello-node-connect-6f49f58cd5-z4svv" [d3c5ffef-ed2a-478e-af88-a2dea7d41f33] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-z4svv" [d3c5ffef-ed2a-478e-af88-a2dea7d41f33] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.00323575s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:31791
functional_test.go:1657: error fetching http://192.168.105.4:31791: Get "http://192.168.105.4:31791": dial tcp 192.168.105.4:31791: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31791: Get "http://192.168.105.4:31791": dial tcp 192.168.105.4:31791: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31791: Get "http://192.168.105.4:31791": dial tcp 192.168.105.4:31791: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31791: Get "http://192.168.105.4:31791": dial tcp 192.168.105.4:31791: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31791: Get "http://192.168.105.4:31791": dial tcp 192.168.105.4:31791: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31791: Get "http://192.168.105.4:31791": dial tcp 192.168.105.4:31791: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31791: Get "http://192.168.105.4:31791": dial tcp 192.168.105.4:31791: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:31791: Get "http://192.168.105.4:31791": dial tcp 192.168.105.4:31791: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-398000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-z4svv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-398000/192.168.105.4
Start Time:       Mon, 29 Jul 2024 10:06:14 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://adc298b84bb81bc2c4ef934be781bd30ef56954519071464062220e9a1a7afc1
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 29 Jul 2024 10:06:34 -0700
      Finished:     Mon, 29 Jul 2024 10:06:34 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-68xhq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-68xhq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  37s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-z4svv to functional-398000
  Normal   Pulling    37s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     32s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 4.483s (4.483s including waiting). Image size: 84957542 bytes.
  Normal   Created    17s (x3 over 32s)  kubelet            Created container echoserver-arm
  Normal   Started    17s (x3 over 32s)  kubelet            Started container echoserver-arm
  Normal   Pulled     17s (x2 over 31s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    4s (x4 over 30s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-z4svv_default(d3c5ffef-ed2a-478e-af88-a2dea7d41f33)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-398000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
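The "exec format error" above is the kernel refusing to execute a binary built for a different CPU architecture: the node is arm64, yet the nginx entrypoint inside the echoserver-arm:1.8 image appears to target another machine type, so the container exits immediately and crash-loops, matching the CrashLoopBackOff in the pod description. As an illustrative sketch (hypothetical file name, standard library only), the machine type an ELF binary targets can be read directly:

    // archcheck.go: prints the ELF machine type of a binary. An EM_X86_64
    // binary exec'd on an arm64 kernel fails with "exec format error".
    package main

    import (
        "debug/elf"
        "fmt"
        "os"
    )

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: archcheck <path-to-binary>")
            os.Exit(2)
        }
        f, err := elf.Open(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()
        fmt.Printf("%s: %v\n", os.Args[1], f.Machine) // e.g. EM_AARCH64 vs EM_X86_64
    }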
functional_test.go:1610: (dbg) Run:  kubectl --context functional-398000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.91.106
IPs:                      10.108.91.106
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31791/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
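Note the empty Endpoints field in the service description above: because the crash-looping pod never becomes Ready, the NodePort service has no backends, which is why every fetch of http://192.168.105.4:31791 earlier in the test ended in "connection refused".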
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-398000 -n functional-398000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-398000                                                                                                | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3890308663/001:/mount-9p     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh findmnt                                                                                       | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT | 29 Jul 24 10:06 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh -- ls                                                                                         | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT | 29 Jul 24 10:06 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh cat                                                                                           | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT | 29 Jul 24 10:06 PDT |
	|           | /mount-9p/test-1722272797659490000                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh stat                                                                                          | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT | 29 Jul 24 10:06 PDT |
	|           | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh stat                                                                                          | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT | 29 Jul 24 10:06 PDT |
	|           | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh sudo                                                                                          | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT | 29 Jul 24 10:06 PDT |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh findmnt                                                                                       | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-398000                                                                                                | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port796730148/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh findmnt                                                                                       | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT | 29 Jul 24 10:06 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh -- ls                                                                                         | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT | 29 Jul 24 10:06 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh sudo                                                                                          | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-398000                                                                                                | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3249420161/001:/mount2  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh findmnt                                                                                       | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-398000                                                                                                | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3249420161/001:/mount1  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-398000                                                                                                | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3249420161/001:/mount3  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh findmnt                                                                                       | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh findmnt                                                                                       | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT | 29 Jul 24 10:06 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh findmnt                                                                                       | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT | 29 Jul 24 10:06 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-398000 ssh findmnt                                                                                       | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT | 29 Jul 24 10:06 PDT |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-398000                                                                                                | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| start     | -p functional-398000                                                                                                | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-398000                                                                                                | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-398000 --dry-run                                                                                      | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-398000 | jenkins | v1.33.1 | 29 Jul 24 10:06 PDT |                     |
	|           | -p functional-398000                                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:06:46
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:06:46.230102    2590 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:06:46.230231    2590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:06:46.230235    2590 out.go:304] Setting ErrFile to fd 2...
	I0729 10:06:46.230240    2590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:06:46.230376    2590 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:06:46.231358    2590 out.go:298] Setting JSON to false
	I0729 10:06:46.247639    2590 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2170,"bootTime":1722270636,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:06:46.247707    2590 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:06:46.252168    2590 out.go:177] * [functional-398000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:06:46.259051    2590 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:06:46.259098    2590 notify.go:220] Checking for updates...
	I0729 10:06:46.266011    2590 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:06:46.269026    2590 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:06:46.272052    2590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:06:46.275058    2590 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:06:46.278043    2590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:06:46.281353    2590 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:06:46.281590    2590 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:06:46.285040    2590 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:06:46.292073    2590 start.go:297] selected driver: qemu2
	I0729 10:06:46.292077    2590 start.go:901] validating driver "qemu2" against &{Name:functional-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:06:46.292118    2590 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:06:46.294237    2590 cni.go:84] Creating CNI manager for ""
	I0729 10:06:46.294258    2590 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:06:46.294296    2590 start.go:340] cluster config:
	{Name:functional-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:06:46.306109    2590 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Jul 29 17:06:39 functional-398000 dockerd[6035]: time="2024-07-29T17:06:39.895854503Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 17:06:41 functional-398000 dockerd[6029]: time="2024-07-29T17:06:41.401627289Z" level=info msg="ignoring event" container=b4fb26bf83a806929b7059b0d02b6bdf228175dadce408ee18933ef2ac17ea8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 17:06:41 functional-398000 dockerd[6035]: time="2024-07-29T17:06:41.401827502Z" level=info msg="shim disconnected" id=b4fb26bf83a806929b7059b0d02b6bdf228175dadce408ee18933ef2ac17ea8e namespace=moby
	Jul 29 17:06:41 functional-398000 dockerd[6035]: time="2024-07-29T17:06:41.401856490Z" level=warning msg="cleaning up after shim disconnected" id=b4fb26bf83a806929b7059b0d02b6bdf228175dadce408ee18933ef2ac17ea8e namespace=moby
	Jul 29 17:06:41 functional-398000 dockerd[6035]: time="2024-07-29T17:06:41.401861988Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 17:06:45 functional-398000 dockerd[6035]: time="2024-07-29T17:06:45.910167615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 17:06:45 functional-398000 dockerd[6035]: time="2024-07-29T17:06:45.910386778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 17:06:45 functional-398000 dockerd[6035]: time="2024-07-29T17:06:45.910393525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 17:06:45 functional-398000 dockerd[6035]: time="2024-07-29T17:06:45.910601818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 17:06:45 functional-398000 dockerd[6035]: time="2024-07-29T17:06:45.940030585Z" level=info msg="shim disconnected" id=aeaf2470b7adbb0fb51ca2ea2fffd3aaa9dc12129d3af210d776e9026858d567 namespace=moby
	Jul 29 17:06:45 functional-398000 dockerd[6035]: time="2024-07-29T17:06:45.940057116Z" level=warning msg="cleaning up after shim disconnected" id=aeaf2470b7adbb0fb51ca2ea2fffd3aaa9dc12129d3af210d776e9026858d567 namespace=moby
	Jul 29 17:06:45 functional-398000 dockerd[6035]: time="2024-07-29T17:06:45.940062405Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 17:06:45 functional-398000 dockerd[6029]: time="2024-07-29T17:06:45.940173153Z" level=info msg="ignoring event" container=aeaf2470b7adbb0fb51ca2ea2fffd3aaa9dc12129d3af210d776e9026858d567 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 17:06:47 functional-398000 dockerd[6035]: time="2024-07-29T17:06:47.150854859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 17:06:47 functional-398000 dockerd[6035]: time="2024-07-29T17:06:47.150907088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 17:06:47 functional-398000 dockerd[6035]: time="2024-07-29T17:06:47.150915918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 17:06:47 functional-398000 dockerd[6035]: time="2024-07-29T17:06:47.151111050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 17:06:47 functional-398000 dockerd[6035]: time="2024-07-29T17:06:47.173668853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 17:06:47 functional-398000 dockerd[6035]: time="2024-07-29T17:06:47.173700757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 17:06:47 functional-398000 dockerd[6035]: time="2024-07-29T17:06:47.173793638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 17:06:47 functional-398000 dockerd[6035]: time="2024-07-29T17:06:47.173852323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 17:06:47 functional-398000 cri-dockerd[6282]: time="2024-07-29T17:06:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/94f7d07b9e6cc76f503ab269d10e32d18f1f5ac3b77a16bef0998e0dbf8cee83/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 29 17:06:47 functional-398000 cri-dockerd[6282]: time="2024-07-29T17:06:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a08e4505b003cd0ca64fd70e575b93ee849374d2ed0c7ccd2d253a2f2a97fff3/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 29 17:06:47 functional-398000 dockerd[6029]: time="2024-07-29T17:06:47.469802633Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Jul 29 17:06:52 functional-398000 cri-dockerd[6282]: time="2024-07-29T17:06:52Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	881af5e0ef689       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        Less than a second ago   Created             kubernetes-dashboard      0                   94f7d07b9e6cc       kubernetes-dashboard-779776cb65-j5d99
	aeaf2470b7adb       72565bf5bbedf                                                                                         7 seconds ago            Exited              echoserver-arm            2                   1391c3f4ca07a       hello-node-65f5d5cc78-8xsw8
	dfae690ff2b8b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 seconds ago           Exited              mount-munger              0                   b4fb26bf83a80       busybox-mount
	adc298b84bb81       72565bf5bbedf                                                                                         18 seconds ago           Exited              echoserver-arm            2                   f9518c74e1854       hello-node-connect-6f49f58cd5-z4svv
	b12bfa6eb13eb       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                         29 seconds ago           Running             myfrontend                0                   c90f5bb0041e0       sp-pod
	b192c8498f7a6       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         45 seconds ago           Running             nginx                     0                   9c75400a08703       nginx-svc
	492d63cb39943       2437cf7621777                                                                                         About a minute ago       Running             coredns                   2                   01c00fea55c78       coredns-7db6d8ff4d-q9z65
	64788679b772c       2351f570ed0ea                                                                                         About a minute ago       Running             kube-proxy                2                   069387c5b13c9       kube-proxy-wwlq6
	e4e14d1760fb5       ba04bb24b9575                                                                                         About a minute ago       Running             storage-provisioner       2                   b2029f08ae664       storage-provisioner
	aa4fa74f54f76       8e97cdb19e7cc                                                                                         About a minute ago       Running             kube-controller-manager   2                   f6a3f99613638       kube-controller-manager-functional-398000
	cf129587262b1       014faa467e297                                                                                         About a minute ago       Running             etcd                      2                   e27235c306d5c       etcd-functional-398000
	2145d6c961baf       d48f992a22722                                                                                         About a minute ago       Running             kube-scheduler            2                   9288fc59c506f       kube-scheduler-functional-398000
	21068b84c8fec       61773190d42ff                                                                                         About a minute ago       Running             kube-apiserver            0                   7c6ac08d01fd6       kube-apiserver-functional-398000
	dc5de00310392       2437cf7621777                                                                                         2 minutes ago            Exited              coredns                   1                   9e95f2c96d802       coredns-7db6d8ff4d-q9z65
	bffaee85a515c       ba04bb24b9575                                                                                         2 minutes ago            Exited              storage-provisioner       1                   98416bf82b91b       storage-provisioner
	d510240379602       2351f570ed0ea                                                                                         2 minutes ago            Exited              kube-proxy                1                   9cbf39fd14f4d       kube-proxy-wwlq6
	c4d019579fac9       014faa467e297                                                                                         2 minutes ago            Exited              etcd                      1                   c5b6eced758b6       etcd-functional-398000
	cdac156deb6b3       d48f992a22722                                                                                         2 minutes ago            Exited              kube-scheduler            1                   a480f0880b4e3       kube-scheduler-functional-398000
	e5621e3509aaa       8e97cdb19e7cc                                                                                         2 minutes ago            Exited              kube-controller-manager   1                   8852e6b310ffc       kube-controller-manager-functional-398000
	
	
	==> coredns [492d63cb3994] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44736 - 37237 "HINFO IN 8836219161310180846.7550373743425035404. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.18601706s
	[INFO] 10.244.0.1:28086 - 50132 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000104789s
	[INFO] 10.244.0.1:38117 - 64315 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000058933s
	[INFO] 10.244.0.1:27181 - 25389 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000024864s
	[INFO] 10.244.0.1:55236 - 8602 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001042814s
	[INFO] 10.244.0.1:46393 - 12621 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000067972s
	[INFO] 10.244.0.1:1878 - 42814 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000169304s
	
	
	==> coredns [dc5de0031039] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41641 - 27756 "HINFO IN 7506571901093726177.5734272684203726129. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016189061s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-398000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-398000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=functional-398000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T10_04_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:04:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-398000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:06:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:06:38 +0000   Mon, 29 Jul 2024 17:04:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:06:38 +0000   Mon, 29 Jul 2024 17:04:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:06:38 +0000   Mon, 29 Jul 2024 17:04:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:06:38 +0000   Mon, 29 Jul 2024 17:04:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-398000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 c735190d145840848056022fc71e3c74
	  System UUID:                c735190d145840848056022fc71e3c74
	  Boot ID:                    1987d138-137d-40ff-ba84-1d5db818713b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-8xsw8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  default                     hello-node-connect-6f49f58cd5-z4svv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 coredns-7db6d8ff4d-q9z65                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m21s
	  kube-system                 etcd-functional-398000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m35s
	  kube-system                 kube-apiserver-functional-398000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-functional-398000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-wwlq6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-functional-398000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-lt4dq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-j5d99        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m21s                kube-proxy       
	  Normal  Starting                 73s                  kube-proxy       
	  Normal  Starting                 119s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m36s                kubelet          Node functional-398000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m36s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m36s                kubelet          Node functional-398000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m36s                kubelet          Node functional-398000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m36s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m32s                kubelet          Node functional-398000 status is now: NodeReady
	  Normal  RegisteredNode           2m23s                node-controller  Node functional-398000 event: Registered Node functional-398000 in Controller
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node functional-398000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node functional-398000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m3s)  kubelet          Node functional-398000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           107s                 node-controller  Node functional-398000 event: Registered Node functional-398000 in Controller
	  Normal  Starting                 78s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s (x8 over 78s)    kubelet          Node functional-398000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x8 over 78s)    kubelet          Node functional-398000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x7 over 78s)    kubelet          Node functional-398000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                  node-controller  Node functional-398000 event: Registered Node functional-398000 in Controller
	
	
	==> dmesg <==
	[Jul29 17:05] kauditd_printk_skb: 31 callbacks suppressed
	[  +4.509570] systemd-fstab-generator[5133]: Ignoring "noauto" option for root device
	[ +10.539760] systemd-fstab-generator[5560]: Ignoring "noauto" option for root device
	[  +0.053162] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.104409] systemd-fstab-generator[5596]: Ignoring "noauto" option for root device
	[  +0.092689] systemd-fstab-generator[5607]: Ignoring "noauto" option for root device
	[  +0.103959] systemd-fstab-generator[5621]: Ignoring "noauto" option for root device
	[  +5.115692] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.398753] systemd-fstab-generator[6235]: Ignoring "noauto" option for root device
	[  +0.072247] systemd-fstab-generator[6247]: Ignoring "noauto" option for root device
	[  +0.072533] systemd-fstab-generator[6259]: Ignoring "noauto" option for root device
	[  +0.083233] systemd-fstab-generator[6274]: Ignoring "noauto" option for root device
	[  +0.229847] systemd-fstab-generator[6440]: Ignoring "noauto" option for root device
	[  +1.059041] systemd-fstab-generator[6565]: Ignoring "noauto" option for root device
	[  +3.392988] kauditd_printk_skb: 199 callbacks suppressed
	[ +12.666558] kauditd_printk_skb: 31 callbacks suppressed
	[  +4.414837] systemd-fstab-generator[7586]: Ignoring "noauto" option for root device
	[  +3.637322] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 17:06] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.796960] kauditd_printk_skb: 13 callbacks suppressed
	[  +9.566009] kauditd_printk_skb: 23 callbacks suppressed
	[ +11.001083] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.282250] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.199376] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.385315] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c4d019579fac] <==
	{"level":"info","ts":"2024-07-29T17:04:50.251971Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T17:04:51.612249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T17:04:51.612396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T17:04:51.612441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-29T17:04:51.612468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T17:04:51.612489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T17:04:51.612514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T17:04:51.61253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T17:04:51.615938Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:04:51.615972Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:04:51.615941Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-398000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T17:04:51.618321Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T17:04:51.61835Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T17:04:51.618606Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T17:04:51.620628Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-29T17:05:20.804026Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T17:05:20.804049Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-398000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-29T17:05:20.804089Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T17:05:20.804125Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T17:05:20.81185Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T17:05:20.811875Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T17:05:20.811896Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-29T17:05:20.813086Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T17:05:20.813121Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T17:05:20.813125Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-398000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [cf129587262b] <==
	{"level":"info","ts":"2024-07-29T17:05:35.746256Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T17:05:35.744018Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T17:05:35.746274Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T17:05:35.744097Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T17:05:35.746279Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T17:05:35.746284Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T17:05:35.750134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-29T17:05:35.753958Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-29T17:05:35.754016Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:05:35.754074Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:05:36.70349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-29T17:05:36.703635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-29T17:05:36.703705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T17:05:36.703739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-29T17:05:36.703754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-29T17:05:36.703778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-29T17:05:36.703952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-29T17:05:36.709046Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:05:36.709607Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:05:36.70905Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-398000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T17:05:36.710418Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T17:05:36.710486Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T17:05:36.714283Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-29T17:05:36.71428Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T17:06:14.300299Z","caller":"traceutil/trace.go:171","msg":"trace[1589817438] transaction","detail":"{read_only:false; response_revision:663; number_of_response:1; }","duration":"189.56884ms","start":"2024-07-29T17:06:14.110706Z","end":"2024-07-29T17:06:14.300275Z","steps":["trace[1589817438] 'process raft request'  (duration: 189.332689ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:06:52 up 2 min,  0 users,  load average: 0.84, 0.41, 0.16
	Linux functional-398000 5.10.207 #1 SMP PREEMPT Tue Jul 23 01:19:38 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [21068b84c8fe] <==
	I0729 17:05:37.328545       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 17:05:37.328582       1 aggregator.go:165] initial CRD sync complete...
	I0729 17:05:37.328590       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 17:05:37.328595       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 17:05:37.328602       1 cache.go:39] Caches are synced for autoregister controller
	I0729 17:05:37.358957       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 17:05:37.360084       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 17:05:37.360103       1 policy_source.go:224] refreshing policies
	I0729 17:05:37.360157       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 17:05:38.230711       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 17:05:38.460229       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 17:05:38.463957       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 17:05:38.474074       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 17:05:38.481899       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 17:05:38.483754       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 17:05:50.702186       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 17:05:50.802985       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 17:05:58.851732       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.79.214"}
	I0729 17:06:03.749646       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.166.230"}
	I0729 17:06:14.302074       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 17:06:14.367545       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.91.106"}
	I0729 17:06:30.441604       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.161.26"}
	I0729 17:06:46.728385       1 controller.go:615] quota admission added evaluator for: namespaces
	I0729 17:06:46.815804       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.50.20"}
	I0729 17:06:46.826295       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.239.170"}
	
	
	==> kube-controller-manager [aa4fa74f54f7] <==
	I0729 17:06:46.755772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="9.601544ms"
	E0729 17:06:46.755792       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 17:06:46.758858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="3.043341ms"
	E0729 17:06:46.758875       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 17:06:46.762948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="9.715749ms"
	E0729 17:06:46.762967       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 17:06:46.764840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="3.176871ms"
	E0729 17:06:46.764929       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 17:06:46.766365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="3.377084ms"
	E0729 17:06:46.766729       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 17:06:46.772376       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="3.546975ms"
	E0729 17:06:46.772446       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 17:06:46.792160       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="5.689547ms"
	E0729 17:06:46.792204       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 17:06:46.798594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="5.400577ms"
	I0729 17:06:46.802588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="3.870181ms"
	I0729 17:06:46.803072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="25.157µs"
	I0729 17:06:46.807547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="23.657µs"
	I0729 17:06:46.811613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="76.012µs"
	I0729 17:06:46.843052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="10.438714ms"
	I0729 17:06:46.853053       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="9.975439ms"
	I0729 17:06:46.853087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="15.868µs"
	I0729 17:06:47.864936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="34.278µs"
	I0729 17:06:52.411144       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="3.516211ms"
	I0729 17:06:52.411711       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="44.441µs"
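
The forbidden syncs above are the ReplicaSet controller racing the creation of the kubernetes-dashboard service account; the clean syncs a few milliseconds later show the account arriving. A minimal manual check, reusing this run's kube context (illustrative):

	kubectl --context functional-398000 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard
	kubectl --context functional-398000 -n kubernetes-dashboard get pods -o wide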
	
	
	==> kube-controller-manager [e5621e3509aa] <==
	I0729 17:05:05.070837       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0729 17:05:05.070862       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0729 17:05:05.070891       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0729 17:05:05.070921       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0729 17:05:05.072656       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 17:05:05.072693       1 shared_informer.go:320] Caches are synced for service account
	I0729 17:05:05.072658       1 shared_informer.go:320] Caches are synced for HPA
	I0729 17:05:05.073630       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 17:05:05.073684       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.95µs"
	I0729 17:05:05.076690       1 shared_informer.go:320] Caches are synced for node
	I0729 17:05:05.076707       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0729 17:05:05.076717       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0729 17:05:05.076719       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0729 17:05:05.076721       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0729 17:05:05.078916       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 17:05:05.098600       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 17:05:05.122146       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 17:05:05.167729       1 shared_informer.go:320] Caches are synced for disruption
	I0729 17:05:05.173250       1 shared_informer.go:320] Caches are synced for deployment
	I0729 17:05:05.177463       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 17:05:05.277533       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 17:05:05.285823       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 17:05:05.683902       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 17:05:05.688071       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 17:05:05.688118       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [64788679b772] <==
	I0729 17:05:38.399817       1 server_linux.go:69] "Using iptables proxy"
	I0729 17:05:38.415770       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0729 17:05:38.449972       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:05:38.450165       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:05:38.450198       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:05:38.452101       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:05:38.452216       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:05:38.452827       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:05:38.453375       1 config.go:192] "Starting service config controller"
	I0729 17:05:38.453419       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:05:38.453448       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:05:38.453477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:05:38.453871       1 config.go:319] "Starting node config controller"
	I0729 17:05:38.453894       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:05:38.553847       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 17:05:38.553875       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:05:38.553944       1 shared_informer.go:320] Caches are synced for node config
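
The proxy mode and the route_localnet sysctl reported above can be double-checked from inside the VM via minikube's ssh passthrough (a sketch; assumes the functional-398000 profile is still running):

	out/minikube-darwin-arm64 -p functional-398000 ssh -- sudo sysctl net.ipv4.conf.all.route_localnet
	out/minikube-darwin-arm64 -p functional-398000 ssh -- sudo iptables -t nat -nL KUBE-NODEPORTS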
	
	
	==> kube-proxy [d51024037960] <==
	I0729 17:04:53.037189       1 server_linux.go:69] "Using iptables proxy"
	I0729 17:04:53.042313       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0729 17:04:53.051447       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:04:53.051461       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:04:53.051468       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:04:53.052074       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:04:53.052140       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:04:53.052149       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:04:53.052505       1 config.go:192] "Starting service config controller"
	I0729 17:04:53.052522       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:04:53.052536       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:04:53.052539       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:04:53.052718       1 config.go:319] "Starting node config controller"
	I0729 17:04:53.052725       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:04:53.153039       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:04:53.153039       1 shared_informer.go:320] Caches are synced for node config
	I0729 17:04:53.153052       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2145d6c961ba] <==
	I0729 17:05:36.130406       1 serving.go:380] Generated self-signed cert in-memory
	W0729 17:05:37.258326       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 17:05:37.258345       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:05:37.258350       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 17:05:37.258352       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 17:05:37.282497       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 17:05:37.282615       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:05:37.283394       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 17:05:37.283449       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 17:05:37.283642       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 17:05:37.283457       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0729 17:05:37.289728       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	E0729 17:05:37.289759       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	I0729 17:05:38.484198       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
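
The RBAC warnings above clear once kubeadm has recreated the kube-system roles; the message's own suggestion, adapted to the scheduler identity it names, would be roughly the following (hypothetical binding name, and only needed if the binding is genuinely absent):

	kubectl -n kube-system create rolebinding scheduler-authentication-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler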
	
	
	==> kube-scheduler [cdac156deb6b] <==
	I0729 17:04:50.515168       1 serving.go:380] Generated self-signed cert in-memory
	W0729 17:04:52.132128       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 17:04:52.132144       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:04:52.132150       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 17:04:52.132153       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 17:04:52.181301       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 17:04:52.186183       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:04:52.187148       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 17:04:52.187243       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 17:04:52.187272       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 17:04:52.187293       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 17:04:52.287531       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 17:05:20.807474       1 run.go:74] "command failed" err="finished without leader elect"
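
The "finished without leader elect" exit means this scheduler instance lost its leadership lease, which lines up with the replacement scheduler container (started 17:05:36) shown above. The current holder can be read off the coordination lease, e.g.:

	kubectl --context functional-398000 -n kube-system get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}'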
	
	
	==> kubelet <==
	Jul 29 17:06:35 functional-398000 kubelet[6572]: E0729 17:06:35.303936    6572 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-z4svv_default(d3c5ffef-ed2a-478e-af88-a2dea7d41f33)\"" pod="default/hello-node-connect-6f49f58cd5-z4svv" podUID="d3c5ffef-ed2a-478e-af88-a2dea7d41f33"
	Jul 29 17:06:38 functional-398000 kubelet[6572]: I0729 17:06:38.390435    6572 topology_manager.go:215] "Topology Admit Handler" podUID="671d1ef6-f1c6-4d78-ab80-98852030666b" podNamespace="default" podName="busybox-mount"
	Jul 29 17:06:38 functional-398000 kubelet[6572]: I0729 17:06:38.513735    6572 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhnrh\" (UniqueName: \"kubernetes.io/projected/671d1ef6-f1c6-4d78-ab80-98852030666b-kube-api-access-fhnrh\") pod \"busybox-mount\" (UID: \"671d1ef6-f1c6-4d78-ab80-98852030666b\") " pod="default/busybox-mount"
	Jul 29 17:06:38 functional-398000 kubelet[6572]: I0729 17:06:38.513766    6572 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/671d1ef6-f1c6-4d78-ab80-98852030666b-test-volume\") pod \"busybox-mount\" (UID: \"671d1ef6-f1c6-4d78-ab80-98852030666b\") " pod="default/busybox-mount"
	Jul 29 17:06:41 functional-398000 kubelet[6572]: I0729 17:06:41.534561    6572 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/671d1ef6-f1c6-4d78-ab80-98852030666b-test-volume\") pod \"671d1ef6-f1c6-4d78-ab80-98852030666b\" (UID: \"671d1ef6-f1c6-4d78-ab80-98852030666b\") "
	Jul 29 17:06:41 functional-398000 kubelet[6572]: I0729 17:06:41.534580    6572 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhnrh\" (UniqueName: \"kubernetes.io/projected/671d1ef6-f1c6-4d78-ab80-98852030666b-kube-api-access-fhnrh\") pod \"671d1ef6-f1c6-4d78-ab80-98852030666b\" (UID: \"671d1ef6-f1c6-4d78-ab80-98852030666b\") "
	Jul 29 17:06:41 functional-398000 kubelet[6572]: I0729 17:06:41.534599    6572 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/671d1ef6-f1c6-4d78-ab80-98852030666b-test-volume" (OuterVolumeSpecName: "test-volume") pod "671d1ef6-f1c6-4d78-ab80-98852030666b" (UID: "671d1ef6-f1c6-4d78-ab80-98852030666b"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 29 17:06:41 functional-398000 kubelet[6572]: I0729 17:06:41.536291    6572 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/671d1ef6-f1c6-4d78-ab80-98852030666b-kube-api-access-fhnrh" (OuterVolumeSpecName: "kube-api-access-fhnrh") pod "671d1ef6-f1c6-4d78-ab80-98852030666b" (UID: "671d1ef6-f1c6-4d78-ab80-98852030666b"). InnerVolumeSpecName "kube-api-access-fhnrh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 17:06:41 functional-398000 kubelet[6572]: I0729 17:06:41.635589    6572 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fhnrh\" (UniqueName: \"kubernetes.io/projected/671d1ef6-f1c6-4d78-ab80-98852030666b-kube-api-access-fhnrh\") on node \"functional-398000\" DevicePath \"\""
	Jul 29 17:06:41 functional-398000 kubelet[6572]: I0729 17:06:41.635599    6572 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/671d1ef6-f1c6-4d78-ab80-98852030666b-test-volume\") on node \"functional-398000\" DevicePath \"\""
	Jul 29 17:06:42 functional-398000 kubelet[6572]: I0729 17:06:42.339992    6572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4fb26bf83a806929b7059b0d02b6bdf228175dadce408ee18933ef2ac17ea8e"
	Jul 29 17:06:45 functional-398000 kubelet[6572]: I0729 17:06:45.860102    6572 scope.go:117] "RemoveContainer" containerID="99b67d5204b7103a41cddc22a9b8462f92772705eb769ce98c69955be83d9989"
	Jul 29 17:06:46 functional-398000 kubelet[6572]: I0729 17:06:46.360311    6572 scope.go:117] "RemoveContainer" containerID="99b67d5204b7103a41cddc22a9b8462f92772705eb769ce98c69955be83d9989"
	Jul 29 17:06:46 functional-398000 kubelet[6572]: I0729 17:06:46.360460    6572 scope.go:117] "RemoveContainer" containerID="aeaf2470b7adbb0fb51ca2ea2fffd3aaa9dc12129d3af210d776e9026858d567"
	Jul 29 17:06:46 functional-398000 kubelet[6572]: E0729 17:06:46.360540    6572 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-8xsw8_default(110e2cb1-e1db-4261-9072-eb3969baa002)\"" pod="default/hello-node-65f5d5cc78-8xsw8" podUID="110e2cb1-e1db-4261-9072-eb3969baa002"
	Jul 29 17:06:46 functional-398000 kubelet[6572]: I0729 17:06:46.805260    6572 topology_manager.go:215] "Topology Admit Handler" podUID="925dbdf8-3160-4c51-84a9-ccc14cfd5487" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-j5d99"
	Jul 29 17:06:46 functional-398000 kubelet[6572]: E0729 17:06:46.805299    6572 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="671d1ef6-f1c6-4d78-ab80-98852030666b" containerName="mount-munger"
	Jul 29 17:06:46 functional-398000 kubelet[6572]: I0729 17:06:46.805315    6572 memory_manager.go:354] "RemoveStaleState removing state" podUID="671d1ef6-f1c6-4d78-ab80-98852030666b" containerName="mount-munger"
	Jul 29 17:06:46 functional-398000 kubelet[6572]: I0729 17:06:46.842350    6572 topology_manager.go:215] "Topology Admit Handler" podUID="fd3241d1-3997-4758-bdae-e79cc06dd755" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-lt4dq"
	Jul 29 17:06:46 functional-398000 kubelet[6572]: I0729 17:06:46.965457    6572 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cvfk\" (UniqueName: \"kubernetes.io/projected/925dbdf8-3160-4c51-84a9-ccc14cfd5487-kube-api-access-7cvfk\") pod \"kubernetes-dashboard-779776cb65-j5d99\" (UID: \"925dbdf8-3160-4c51-84a9-ccc14cfd5487\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-j5d99"
	Jul 29 17:06:46 functional-398000 kubelet[6572]: I0729 17:06:46.965508    6572 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fd3241d1-3997-4758-bdae-e79cc06dd755-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-lt4dq\" (UID: \"fd3241d1-3997-4758-bdae-e79cc06dd755\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-lt4dq"
	Jul 29 17:06:46 functional-398000 kubelet[6572]: I0729 17:06:46.965524    6572 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpgfb\" (UniqueName: \"kubernetes.io/projected/fd3241d1-3997-4758-bdae-e79cc06dd755-kube-api-access-lpgfb\") pod \"dashboard-metrics-scraper-b5fc48f67-lt4dq\" (UID: \"fd3241d1-3997-4758-bdae-e79cc06dd755\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-lt4dq"
	Jul 29 17:06:46 functional-398000 kubelet[6572]: I0729 17:06:46.965533    6572 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/925dbdf8-3160-4c51-84a9-ccc14cfd5487-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-j5d99\" (UID: \"925dbdf8-3160-4c51-84a9-ccc14cfd5487\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-j5d99"
	Jul 29 17:06:47 functional-398000 kubelet[6572]: I0729 17:06:47.860551    6572 scope.go:117] "RemoveContainer" containerID="adc298b84bb81bc2c4ef934be781bd30ef56954519071464062220e9a1a7afc1"
	Jul 29 17:06:47 functional-398000 kubelet[6572]: E0729 17:06:47.860641    6572 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-z4svv_default(d3c5ffef-ed2a-478e-af88-a2dea7d41f33)\"" pod="default/hello-node-connect-6f49f58cd5-z4svv" podUID="d3c5ffef-ed2a-478e-af88-a2dea7d41f33"
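
For the recurring echoserver-arm CrashLoopBackOff above, the standard triage is the previous container's logs plus the pod events; reusing the pod name from this run:

	kubectl --context functional-398000 logs hello-node-connect-6f49f58cd5-z4svv --previous
	kubectl --context functional-398000 describe pod hello-node-connect-6f49f58cd5-z4svv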
	
	
	==> storage-provisioner [bffaee85a515] <==
	I0729 17:04:52.991174       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 17:04:52.998954       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 17:04:52.998971       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 17:05:10.385354       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 17:05:10.385590       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56bede8b-8dd0-4338-b435-3367534ba9d1", APIVersion:"v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-398000_54ab05ad-d86e-4e28-99e3-cd31dbd00569 became leader
	I0729 17:05:10.386436       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-398000_54ab05ad-d86e-4e28-99e3-cd31dbd00569!
	I0729 17:05:10.486522       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-398000_54ab05ad-d86e-4e28-99e3-cd31dbd00569!
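
The lock this provisioner holds is the Endpoints-based lease named in the event above; it can be inspected directly when leadership handoffs need debugging:

	kubectl --context functional-398000 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml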
	
	
	==> storage-provisioner [e4e14d1760fb] <==
	I0729 17:05:38.367512       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 17:05:38.373620       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 17:05:38.373662       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 17:05:55.758903       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 17:05:55.758974       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-398000_771478dc-0052-48f6-a306-70bb339cf67a!
	I0729 17:05:55.759125       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56bede8b-8dd0-4338-b435-3367534ba9d1", APIVersion:"v1", ResourceVersion:"584", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-398000_771478dc-0052-48f6-a306-70bb339cf67a became leader
	I0729 17:05:55.859565       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-398000_771478dc-0052-48f6-a306-70bb339cf67a!
	I0729 17:06:09.447459       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0729 17:06:09.447490       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    e7bd7f65-a890-4127-b326-1bf4a09653f5 350 0 2024-07-29 17:04:31 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-29 17:04:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f79aef22-a5f3-4c1b-84b6-412574204da9 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f79aef22-a5f3-4c1b-84b6-412574204da9 646 0 2024-07-29 17:06:09 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-29 17:06:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-29 17:06:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0729 17:06:09.447913       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f79aef22-a5f3-4c1b-84b6-412574204da9" provisioned
	I0729 17:06:09.447924       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0729 17:06:09.447926       1 volume_store.go:212] Trying to save persistentvolume "pvc-f79aef22-a5f3-4c1b-84b6-412574204da9"
	I0729 17:06:09.448690       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f79aef22-a5f3-4c1b-84b6-412574204da9", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0729 17:06:09.454458       1 volume_store.go:219] persistentvolume "pvc-f79aef22-a5f3-4c1b-84b6-412574204da9" saved
	I0729 17:06:09.454817       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f79aef22-a5f3-4c1b-84b6-412574204da9", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f79aef22-a5f3-4c1b-84b6-412574204da9
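
The claim that drove this provisioning cycle is recoverable from the last-applied-configuration echoed in the log; an equivalent submission would be (a sketch reconstructed from that annotation):

	kubectl --context functional-398000 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes: ["ReadWriteOnce"]
	  volumeMode: Filesystem
	  resources:
	    requests:
	      storage: 500Mi
	EOF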
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-398000 -n functional-398000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-398000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-b5fc48f67-lt4dq
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-398000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-lt4dq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-398000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-lt4dq: exit status 1 (40.0235ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-398000/192.168.105.4
	Start Time:       Mon, 29 Jul 2024 10:06:38 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://dfae690ff2b8bcfc4646a97c6a3269c57a4af46464c88bccaa4105df1af61643
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Jul 2024 10:06:39 -0700
	      Finished:     Mon, 29 Jul 2024 10:06:39 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fhnrh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fhnrh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  14s   default-scheduler  Successfully assigned default/busybox-mount to functional-398000
	  Normal  Pulling    14s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     13s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.057s (1.057s including waiting). Image size: 3547125 bytes.
	  Normal  Created    13s   kubelet            Created container mount-munger
	  Normal  Started    13s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-lt4dq" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-398000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-lt4dq: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (38.91s)
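
The NotFound in the stderr above is as much a namespace mismatch as a missing pod: the kubelet admitted dashboard-metrics-scraper-b5fc48f67-lt4dq into kubernetes-dashboard, while the post-mortem describe ran against default. Describing each pod in its own namespace separates the two outcomes (sketch):

	kubectl --context functional-398000 describe pod busybox-mount
	kubectl --context functional-398000 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-b5fc48f67-lt4dq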

TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-011000 node stop m02 -v=7 --alsologtostderr: (12.191959708s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr
E0729 10:11:44.605624    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:12:25.566595    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:13:47.486300    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:14:17.493209    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr: exit status 7 (2m55.967837s)

-- stdout --
	ha-011000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-011000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-011000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 10:11:41.372886    3008 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:11:41.373051    3008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:11:41.373054    3008 out.go:304] Setting ErrFile to fd 2...
	I0729 10:11:41.373056    3008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:11:41.373218    3008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:11:41.373331    3008 out.go:298] Setting JSON to false
	I0729 10:11:41.373344    3008 mustload.go:65] Loading cluster: ha-011000
	I0729 10:11:41.373380    3008 notify.go:220] Checking for updates...
	I0729 10:11:41.373580    3008 config.go:182] Loaded profile config "ha-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:11:41.373591    3008 status.go:255] checking status of ha-011000 ...
	I0729 10:11:41.374286    3008 status.go:330] ha-011000 host status = "Running" (err=<nil>)
	I0729 10:11:41.374294    3008 host.go:66] Checking if "ha-011000" exists ...
	I0729 10:11:41.374390    3008 host.go:66] Checking if "ha-011000" exists ...
	I0729 10:11:41.374506    3008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:11:41.374514    3008 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/id_rsa Username:docker}
	W0729 10:12:07.298072    3008 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0729 10:12:07.298227    3008 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 10:12:07.298254    3008 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 10:12:07.298265    3008 status.go:257] ha-011000 status: &{Name:ha-011000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 10:12:07.298286    3008 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 10:12:07.298295    3008 status.go:255] checking status of ha-011000-m02 ...
	I0729 10:12:07.298686    3008 status.go:330] ha-011000-m02 host status = "Stopped" (err=<nil>)
	I0729 10:12:07.298696    3008 status.go:343] host is not running, skipping remaining checks
	I0729 10:12:07.298702    3008 status.go:257] ha-011000-m02 status: &{Name:ha-011000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:12:07.298716    3008 status.go:255] checking status of ha-011000-m03 ...
	I0729 10:12:07.299901    3008 status.go:330] ha-011000-m03 host status = "Running" (err=<nil>)
	I0729 10:12:07.299911    3008 host.go:66] Checking if "ha-011000-m03" exists ...
	I0729 10:12:07.300137    3008 host.go:66] Checking if "ha-011000-m03" exists ...
	I0729 10:12:07.300392    3008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:12:07.300406    3008 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m03/id_rsa Username:docker}
	W0729 10:13:22.299864    3008 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0729 10:13:22.299927    3008 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0729 10:13:22.299937    3008 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 10:13:22.299942    3008 status.go:257] ha-011000-m03 status: &{Name:ha-011000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 10:13:22.299952    3008 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 10:13:22.299958    3008 status.go:255] checking status of ha-011000-m04 ...
	I0729 10:13:22.300790    3008 status.go:330] ha-011000-m04 host status = "Running" (err=<nil>)
	I0729 10:13:22.300800    3008 host.go:66] Checking if "ha-011000-m04" exists ...
	I0729 10:13:22.300924    3008 host.go:66] Checking if "ha-011000-m04" exists ...
	I0729 10:13:22.301068    3008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:13:22.301074    3008 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m04/id_rsa Username:docker}
	W0729 10:14:37.301135    3008 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0729 10:14:37.301186    3008 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0729 10:14:37.301194    3008 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0729 10:14:37.301198    3008 status.go:257] ha-011000-m04 status: &{Name:ha-011000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 10:14:37.301206    3008 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr": ha-011000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-011000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-011000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-011000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr": ha-011000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-011000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-011000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-011000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr": ha-011000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-011000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-011000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-011000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000: exit status 3 (25.957151167s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0729 10:15:03.258703    3040 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 10:15:03.258716    3040 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-011000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.12s)
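
Every Host:Error above is an SSH dial timeout rather than a state the VM reported; the probe that the status command runs can be reproduced by hand with the key path from the log (sketch):

	ssh -o ConnectTimeout=5 \
	  -i /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/id_rsa \
	  docker@192.168.105.5 "df -h /var | awk 'NR==2{print \$5}'"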

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (126.99s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0729 10:16:03.622494    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:16:31.226721    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m40.986211s)
ha_test.go:413: expected profile "ha-011000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-011000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-011000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-011000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000: exit status 3 (26.00013225s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0729 10:17:10.144572    3072 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 10:17:10.144582    3072 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-011000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (126.99s)
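
The assertion compares only the Status field of the profile JSON above; pulling out just that value makes the Stopped-vs-Degraded mismatch easier to see (illustrative; assumes jq is installed):

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-011000") | .Status'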

TestMultiControlPlane/serial/RestartSecondaryNode (184.06s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-011000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.082501375s)

-- stdout --
	* Starting "ha-011000-m02" control-plane node in "ha-011000" cluster
	* Restarting existing qemu2 VM for "ha-011000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-011000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:17:10.177480    3078 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:17:10.177722    3078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:17:10.177727    3078 out.go:304] Setting ErrFile to fd 2...
	I0729 10:17:10.177729    3078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:17:10.177860    3078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:17:10.178095    3078 mustload.go:65] Loading cluster: ha-011000
	I0729 10:17:10.178316    3078 config.go:182] Loaded profile config "ha-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0729 10:17:10.178529    3078 host.go:58] "ha-011000-m02" host status: Stopped
	I0729 10:17:10.183151    3078 out.go:177] * Starting "ha-011000-m02" control-plane node in "ha-011000" cluster
	I0729 10:17:10.187147    3078 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:17:10.187166    3078 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:17:10.187174    3078 cache.go:56] Caching tarball of preloaded images
	I0729 10:17:10.187242    3078 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:17:10.187249    3078 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:17:10.187300    3078 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/ha-011000/config.json ...
	I0729 10:17:10.187658    3078 start.go:360] acquireMachinesLock for ha-011000-m02: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:17:10.187702    3078 start.go:364] duration metric: took 30.625µs to acquireMachinesLock for "ha-011000-m02"
	I0729 10:17:10.187710    3078 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:17:10.187715    3078 fix.go:54] fixHost starting: m02
	I0729 10:17:10.187811    3078 fix.go:112] recreateIfNeeded on ha-011000-m02: state=Stopped err=<nil>
	W0729 10:17:10.187816    3078 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:17:10.191089    3078 out.go:177] * Restarting existing qemu2 VM for "ha-011000-m02" ...
	I0729 10:17:10.195107    3078 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:17:10.195147    3078 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:63:86:9c:9f:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/disk.qcow2
	I0729 10:17:10.197725    3078 main.go:141] libmachine: STDOUT: 
	I0729 10:17:10.197743    3078 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:17:10.197768    3078 fix.go:56] duration metric: took 10.052167ms for fixHost
	I0729 10:17:10.197772    3078 start.go:83] releasing machines lock for "ha-011000-m02", held for 10.067541ms
	W0729 10:17:10.197778    3078 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:17:10.197806    3078 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:17:10.197810    3078 start.go:729] Will try again in 5 seconds ...
	I0729 10:17:15.199259    3078 start.go:360] acquireMachinesLock for ha-011000-m02: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:17:15.199379    3078 start.go:364] duration metric: took 101.75µs to acquireMachinesLock for "ha-011000-m02"
	I0729 10:17:15.199430    3078 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:17:15.199435    3078 fix.go:54] fixHost starting: m02
	I0729 10:17:15.199616    3078 fix.go:112] recreateIfNeeded on ha-011000-m02: state=Stopped err=<nil>
	W0729 10:17:15.199622    3078 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:17:15.203461    3078 out.go:177] * Restarting existing qemu2 VM for "ha-011000-m02" ...
	I0729 10:17:15.206412    3078 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:17:15.206457    3078 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:63:86:9c:9f:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/disk.qcow2
	I0729 10:17:15.208918    3078 main.go:141] libmachine: STDOUT: 
	I0729 10:17:15.208936    3078 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:17:15.208963    3078 fix.go:56] duration metric: took 9.528458ms for fixHost
	I0729 10:17:15.208968    3078 start.go:83] releasing machines lock for "ha-011000-m02", held for 9.583584ms
	W0729 10:17:15.209005    3078 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:17:15.213442    3078 out.go:177] 
	W0729 10:17:15.217552    3078 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:17:15.217556    3078 out.go:239] * 
	* 
	W0729 10:17:15.219263    3078 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:17:15.223412    3078 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0729 10:17:10.177480    3078 out.go:291] Setting OutFile to fd 1 ...
I0729 10:17:10.177722    3078 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:17:10.177727    3078 out.go:304] Setting ErrFile to fd 2...
I0729 10:17:10.177729    3078 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:17:10.177860    3078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
I0729 10:17:10.178095    3078 mustload.go:65] Loading cluster: ha-011000
I0729 10:17:10.178316    3078 config.go:182] Loaded profile config "ha-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0729 10:17:10.178529    3078 host.go:58] "ha-011000-m02" host status: Stopped
I0729 10:17:10.183151    3078 out.go:177] * Starting "ha-011000-m02" control-plane node in "ha-011000" cluster
I0729 10:17:10.187147    3078 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 10:17:10.187166    3078 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 10:17:10.187174    3078 cache.go:56] Caching tarball of preloaded images
I0729 10:17:10.187242    3078 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 10:17:10.187249    3078 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 10:17:10.187300    3078 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/ha-011000/config.json ...
I0729 10:17:10.187658    3078 start.go:360] acquireMachinesLock for ha-011000-m02: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 10:17:10.187702    3078 start.go:364] duration metric: took 30.625µs to acquireMachinesLock for "ha-011000-m02"
I0729 10:17:10.187710    3078 start.go:96] Skipping create...Using existing machine configuration
I0729 10:17:10.187715    3078 fix.go:54] fixHost starting: m02
I0729 10:17:10.187811    3078 fix.go:112] recreateIfNeeded on ha-011000-m02: state=Stopped err=<nil>
W0729 10:17:10.187816    3078 fix.go:138] unexpected machine state, will restart: <nil>
I0729 10:17:10.191089    3078 out.go:177] * Restarting existing qemu2 VM for "ha-011000-m02" ...
I0729 10:17:10.195107    3078 qemu.go:418] Using hvf for hardware acceleration
I0729 10:17:10.195147    3078 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:63:86:9c:9f:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/disk.qcow2
I0729 10:17:10.197725    3078 main.go:141] libmachine: STDOUT: 
I0729 10:17:10.197743    3078 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0729 10:17:10.197768    3078 fix.go:56] duration metric: took 10.052167ms for fixHost
I0729 10:17:10.197772    3078 start.go:83] releasing machines lock for "ha-011000-m02", held for 10.067541ms
W0729 10:17:10.197778    3078 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 10:17:10.197806    3078 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 10:17:10.197810    3078 start.go:729] Will try again in 5 seconds ...
I0729 10:17:15.199259    3078 start.go:360] acquireMachinesLock for ha-011000-m02: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 10:17:15.199379    3078 start.go:364] duration metric: took 101.75µs to acquireMachinesLock for "ha-011000-m02"
I0729 10:17:15.199430    3078 start.go:96] Skipping create...Using existing machine configuration
I0729 10:17:15.199435    3078 fix.go:54] fixHost starting: m02
I0729 10:17:15.199616    3078 fix.go:112] recreateIfNeeded on ha-011000-m02: state=Stopped err=<nil>
W0729 10:17:15.199622    3078 fix.go:138] unexpected machine state, will restart: <nil>
I0729 10:17:15.203461    3078 out.go:177] * Restarting existing qemu2 VM for "ha-011000-m02" ...
I0729 10:17:15.206412    3078 qemu.go:418] Using hvf for hardware acceleration
I0729 10:17:15.206457    3078 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:63:86:9c:9f:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m02/disk.qcow2
I0729 10:17:15.208918    3078 main.go:141] libmachine: STDOUT: 
I0729 10:17:15.208936    3078 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0729 10:17:15.208963    3078 fix.go:56] duration metric: took 9.528458ms for fixHost
I0729 10:17:15.208968    3078 start.go:83] releasing machines lock for "ha-011000-m02", held for 9.583584ms
W0729 10:17:15.209005    3078 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 10:17:15.213442    3078 out.go:177] 
W0729 10:17:15.217552    3078 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 10:17:15.217556    3078 out.go:239] * 
* 
W0729 10:17:15.219263    3078 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 10:17:15.223412    3078 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-011000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr
E0729 10:19:17.384452    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr: exit status 7 (2m33.014763667s)

                                                
                                                
-- stdout --
	ha-011000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-011000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-011000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:17:15.260577    3082 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:17:15.260715    3082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:17:15.260719    3082 out.go:304] Setting ErrFile to fd 2...
	I0729 10:17:15.260722    3082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:17:15.260862    3082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:17:15.260987    3082 out.go:298] Setting JSON to false
	I0729 10:17:15.261004    3082 mustload.go:65] Loading cluster: ha-011000
	I0729 10:17:15.261058    3082 notify.go:220] Checking for updates...
	I0729 10:17:15.261230    3082 config.go:182] Loaded profile config "ha-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:17:15.261240    3082 status.go:255] checking status of ha-011000 ...
	I0729 10:17:15.261985    3082 status.go:330] ha-011000 host status = "Running" (err=<nil>)
	I0729 10:17:15.261993    3082 host.go:66] Checking if "ha-011000" exists ...
	I0729 10:17:15.262098    3082 host.go:66] Checking if "ha-011000" exists ...
	I0729 10:17:15.262208    3082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:17:15.262216    3082 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/id_rsa Username:docker}
	W0729 10:17:15.262384    3082 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 10:17:15.262400    3082 retry.go:31] will retry after 338.221477ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 10:17:15.602900    3082 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 10:17:15.602938    3082 retry.go:31] will retry after 355.630621ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 10:17:15.960755    3082 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 10:17:15.960777    3082 retry.go:31] will retry after 767.733434ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 10:17:16.730629    3082 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 10:17:16.730688    3082 retry.go:31] will retry after 188.737517ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0729 10:17:16.921478    3082 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/id_rsa Username:docker}
	W0729 10:17:16.921783    3082 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 10:17:16.921795    3082 retry.go:31] will retry after 371.027968ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 10:17:17.294978    3082 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 10:17:17.294999    3082 retry.go:31] will retry after 501.43014ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 10:17:17.798565    3082 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 10:17:17.798585    3082 retry.go:31] will retry after 435.876893ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 10:17:18.236612    3082 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	W0729 10:17:18.236652    3082 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	E0729 10:17:18.236660    3082 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0729 10:17:18.236664    3082 status.go:257] ha-011000 status: &{Name:ha-011000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 10:17:18.236680    3082 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0729 10:17:18.236686    3082 status.go:255] checking status of ha-011000-m02 ...
	I0729 10:17:18.236857    3082 status.go:330] ha-011000-m02 host status = "Stopped" (err=<nil>)
	I0729 10:17:18.236861    3082 status.go:343] host is not running, skipping remaining checks
	I0729 10:17:18.236864    3082 status.go:257] ha-011000-m02 status: &{Name:ha-011000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:17:18.236869    3082 status.go:255] checking status of ha-011000-m03 ...
	I0729 10:17:18.237473    3082 status.go:330] ha-011000-m03 host status = "Running" (err=<nil>)
	I0729 10:17:18.237480    3082 host.go:66] Checking if "ha-011000-m03" exists ...
	I0729 10:17:18.237587    3082 host.go:66] Checking if "ha-011000-m03" exists ...
	I0729 10:17:18.237722    3082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:17:18.237728    3082 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m03/id_rsa Username:docker}
	W0729 10:18:33.235857    3082 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0729 10:18:33.235898    3082 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0729 10:18:33.235905    3082 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 10:18:33.235909    3082 status.go:257] ha-011000-m03 status: &{Name:ha-011000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 10:18:33.235933    3082 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 10:18:33.235938    3082 status.go:255] checking status of ha-011000-m04 ...
	I0729 10:18:33.236643    3082 status.go:330] ha-011000-m04 host status = "Running" (err=<nil>)
	I0729 10:18:33.236650    3082 host.go:66] Checking if "ha-011000-m04" exists ...
	I0729 10:18:33.236760    3082 host.go:66] Checking if "ha-011000-m04" exists ...
	I0729 10:18:33.236884    3082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:18:33.236890    3082 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000-m04/id_rsa Username:docker}
	W0729 10:19:48.234773    3082 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0729 10:19:48.234818    3082 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0729 10:19:48.234827    3082 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0729 10:19:48.234831    3082 status.go:257] ha-011000-m04 status: &{Name:ha-011000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 10:19:48.234841    3082 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr" : exit status 7
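
[Editor's note] The status probes above all fail at the SSH stage: for each node, minikube runs `df -h /var` over SSH (the `ssh_runner.go` lines) and every dial to port 22 returns "host is down" or times out because the VMs never booted. The same probe can be reproduced by hand with the key path, user, and IP taken from the log (a sketch, not something the suite runs):

    # expect a timeout while the VM is down; success confirms the node is reachable
    ssh -i /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/id_rsa \
        docker@192.168.105.5 'df -h /var'
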
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000: exit status 3 (25.9602365s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 10:20:14.194198    3112 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 10:20:14.194212    3112 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-011000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (184.06s)

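
[Editor's note] Every restart attempt in this test fails before qemu ever runs: socket_vmnet_client cannot reach the socket_vmnet daemon, so the driver reports `Failed to connect to "/var/run/socket_vmnet": Connection refused`. Assuming socket_vmnet was installed via Homebrew, as the /opt/socket_vmnet paths suggest, a plausible triage on the agent would be (a sketch; these are not commands the suite runs):

    # is the daemon's unix socket present and owned as expected?
    ls -l /var/run/socket_vmnet
    # socket_vmnet_client connects first, then execs its command, so running a trivial
    # command doubles as a connectivity probe (it prints the same error on failure)
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # restart the daemon; it must run as root to create vmnet interfaces
    sudo brew services restart socket_vmnet
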
TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-011000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-011000 -v=7 --alsologtostderr
E0729 10:24:17.369647    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-011000 -v=7 --alsologtostderr: (3m49.023236916s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-011000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-011000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.216966s)

                                                
                                                
-- stdout --
	* [ha-011000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-011000" primary control-plane node in "ha-011000" cluster
	* Restarting existing qemu2 VM for "ha-011000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-011000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:25:21.613863    3231 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:25:21.614028    3231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:25:21.614033    3231 out.go:304] Setting ErrFile to fd 2...
	I0729 10:25:21.614036    3231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:25:21.614191    3231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:25:21.615337    3231 out.go:298] Setting JSON to false
	I0729 10:25:21.635426    3231 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3285,"bootTime":1722270636,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:25:21.635497    3231 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:25:21.639200    3231 out.go:177] * [ha-011000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:25:21.647238    3231 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:25:21.647283    3231 notify.go:220] Checking for updates...
	I0729 10:25:21.654335    3231 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:25:21.657159    3231 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:25:21.660213    3231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:25:21.663187    3231 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:25:21.664434    3231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:25:21.667543    3231 config.go:182] Loaded profile config "ha-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:25:21.667603    3231 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:25:21.672212    3231 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:25:21.677186    3231 start.go:297] selected driver: qemu2
	I0729 10:25:21.677194    3231 start.go:901] validating driver "qemu2" against &{Name:ha-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-011000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:25:21.677286    3231 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:25:21.679953    3231 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:25:21.679998    3231 cni.go:84] Creating CNI manager for ""
	I0729 10:25:21.680002    3231 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 10:25:21.680052    3231 start.go:340] cluster config:
	{Name:ha-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-011000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:25:21.684082    3231 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:25:21.692164    3231 out.go:177] * Starting "ha-011000" primary control-plane node in "ha-011000" cluster
	I0729 10:25:21.696197    3231 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:25:21.696213    3231 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:25:21.696223    3231 cache.go:56] Caching tarball of preloaded images
	I0729 10:25:21.696294    3231 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:25:21.696301    3231 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:25:21.696368    3231 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/ha-011000/config.json ...
	I0729 10:25:21.696771    3231 start.go:360] acquireMachinesLock for ha-011000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:25:21.696804    3231 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "ha-011000"
	I0729 10:25:21.696814    3231 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:25:21.696819    3231 fix.go:54] fixHost starting: 
	I0729 10:25:21.696936    3231 fix.go:112] recreateIfNeeded on ha-011000: state=Stopped err=<nil>
	W0729 10:25:21.696944    3231 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:25:21.701177    3231 out.go:177] * Restarting existing qemu2 VM for "ha-011000" ...
	I0729 10:25:21.709252    3231 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:25:21.709293    3231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:01:0f:42:c2:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/disk.qcow2
	I0729 10:25:21.711317    3231 main.go:141] libmachine: STDOUT: 
	I0729 10:25:21.711335    3231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:25:21.711364    3231 fix.go:56] duration metric: took 14.545458ms for fixHost
	I0729 10:25:21.711368    3231 start.go:83] releasing machines lock for "ha-011000", held for 14.560291ms
	W0729 10:25:21.711375    3231 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:25:21.711404    3231 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:25:21.711409    3231 start.go:729] Will try again in 5 seconds ...
	I0729 10:25:26.713356    3231 start.go:360] acquireMachinesLock for ha-011000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:25:26.713719    3231 start.go:364] duration metric: took 270.417µs to acquireMachinesLock for "ha-011000"
	I0729 10:25:26.713844    3231 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:25:26.713860    3231 fix.go:54] fixHost starting: 
	I0729 10:25:26.714533    3231 fix.go:112] recreateIfNeeded on ha-011000: state=Stopped err=<nil>
	W0729 10:25:26.714559    3231 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:25:26.718962    3231 out.go:177] * Restarting existing qemu2 VM for "ha-011000" ...
	I0729 10:25:26.723914    3231 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:25:26.724187    3231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:01:0f:42:c2:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/disk.qcow2
	I0729 10:25:26.732841    3231 main.go:141] libmachine: STDOUT: 
	I0729 10:25:26.732900    3231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:25:26.732963    3231 fix.go:56] duration metric: took 19.106167ms for fixHost
	I0729 10:25:26.732977    3231 start.go:83] releasing machines lock for "ha-011000", held for 19.238958ms
	W0729 10:25:26.733170    3231 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:25:26.739860    3231 out.go:177] 
	W0729 10:25:26.743991    3231 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:25:26.744031    3231 out.go:239] * 
	* 
	W0729 10:25:26.746639    3231 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:25:26.752953    3231 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 start -p ha-011000 --wait=true -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-011000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000: exit status 7 (33.195458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.38s)

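
[Editor's note] The `libmachine: executing:` lines above show how the qemu2 driver wires guest networking: it does not start qemu directly but wraps it in socket_vmnet_client, which connects to /var/run/socket_vmnet and then execs qemu with that connection inherited as file descriptor 3; qemu attaches the fd via `-netdev socket,id=net0,fd=3`. Reduced to the networking plumbing (arguments copied from the failing command above):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -accel hvf \
        -device virtio-net-pci,netdev=net0,mac=d2:01:0f:42:c2:d4 \
        -netdev socket,id=net0,fd=3 \
        -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/disk.qcow2

This is why a refused connection aborts the start before any qemu output appears: the wrapper exits with status 1 and qemu is never exec'd.
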
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-011000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.148458ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-011000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-011000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:25:26.896933    3243 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:25:26.897184    3243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:25:26.897187    3243 out.go:304] Setting ErrFile to fd 2...
	I0729 10:25:26.897189    3243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:25:26.897306    3243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:25:26.897520    3243 mustload.go:65] Loading cluster: ha-011000
	I0729 10:25:26.897750    3243 config.go:182] Loaded profile config "ha-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0729 10:25:26.898064    3243 out.go:239] ! The control-plane node ha-011000 host is not running (will try others): state=Stopped
	! The control-plane node ha-011000 host is not running (will try others): state=Stopped
	W0729 10:25:26.898174    3243 out.go:239] ! The control-plane node ha-011000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-011000-m02 host is not running (will try others): state=Stopped
	I0729 10:25:26.902851    3243 out.go:177] * The control-plane node ha-011000-m03 host is not running: state=Stopped
	I0729 10:25:26.905856    3243 out.go:177]   To start a cluster, run: "minikube start -p ha-011000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-011000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr: exit status 7 (28.976875ms)

                                                
                                                
-- stdout --
	ha-011000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:25:26.936598    3245 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:25:26.936725    3245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:25:26.936730    3245 out.go:304] Setting ErrFile to fd 2...
	I0729 10:25:26.936733    3245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:25:26.936860    3245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:25:26.936972    3245 out.go:298] Setting JSON to false
	I0729 10:25:26.936982    3245 mustload.go:65] Loading cluster: ha-011000
	I0729 10:25:26.937038    3245 notify.go:220] Checking for updates...
	I0729 10:25:26.937226    3245 config.go:182] Loaded profile config "ha-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:25:26.937233    3245 status.go:255] checking status of ha-011000 ...
	I0729 10:25:26.937441    3245 status.go:330] ha-011000 host status = "Stopped" (err=<nil>)
	I0729 10:25:26.937445    3245 status.go:343] host is not running, skipping remaining checks
	I0729 10:25:26.937447    3245 status.go:257] ha-011000 status: &{Name:ha-011000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:25:26.937457    3245 status.go:255] checking status of ha-011000-m02 ...
	I0729 10:25:26.937558    3245 status.go:330] ha-011000-m02 host status = "Stopped" (err=<nil>)
	I0729 10:25:26.937561    3245 status.go:343] host is not running, skipping remaining checks
	I0729 10:25:26.937563    3245 status.go:257] ha-011000-m02 status: &{Name:ha-011000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:25:26.937567    3245 status.go:255] checking status of ha-011000-m03 ...
	I0729 10:25:26.937653    3245 status.go:330] ha-011000-m03 host status = "Stopped" (err=<nil>)
	I0729 10:25:26.937656    3245 status.go:343] host is not running, skipping remaining checks
	I0729 10:25:26.937658    3245 status.go:257] ha-011000-m03 status: &{Name:ha-011000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:25:26.937663    3245 status.go:255] checking status of ha-011000-m04 ...
	I0729 10:25:26.937759    3245 status.go:330] ha-011000-m04 host status = "Stopped" (err=<nil>)
	I0729 10:25:26.937761    3245 status.go:343] host is not running, skipping remaining checks
	I0729 10:25:26.937763    3245 status.go:257] ha-011000-m04 status: &{Name:ha-011000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000: exit status 7 (28.99725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-011000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-011000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-011000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-011000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000: exit status 7 (56.499542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.04s)
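
The check at ha_test.go:413 decodes the `profile list --output json` payload quoted above and compares the profile's top-level "Status" field with the expected "Degraded". A minimal standalone sketch of that comparison, assuming only the two fields it reads (the struct below is illustrative, not minikube's own type):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just the fields the assertion reads from
// `minikube profile list --output json`; the full payload is far
// larger (see the log above).
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-011000" && p.Status != "Degraded" {
			fmt.Printf("expected %q to have status Degraded, got %q\n", p.Name, p.Status)
		}
	}
}
```

Because the whole cluster is down (the post-mortem below reports the host as Stopped), the computed status is "Stopped" rather than "Degraded", so the comparison fails immediately.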

TestMultiControlPlane/serial/StopCluster (202.09s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 stop -v=7 --alsologtostderr
E0729 10:26:03.497971    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:27:26.558021    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-011000 stop -v=7 --alsologtostderr: (3m21.989467042s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr: exit status 7 (65.343125ms)

-- stdout --
	ha-011000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:28:50.046875    3337 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:28:50.047102    3337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:28:50.047106    3337 out.go:304] Setting ErrFile to fd 2...
	I0729 10:28:50.047109    3337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:28:50.047294    3337 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:28:50.047453    3337 out.go:298] Setting JSON to false
	I0729 10:28:50.047466    3337 mustload.go:65] Loading cluster: ha-011000
	I0729 10:28:50.047496    3337 notify.go:220] Checking for updates...
	I0729 10:28:50.047760    3337 config.go:182] Loaded profile config "ha-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:28:50.047771    3337 status.go:255] checking status of ha-011000 ...
	I0729 10:28:50.048023    3337 status.go:330] ha-011000 host status = "Stopped" (err=<nil>)
	I0729 10:28:50.048027    3337 status.go:343] host is not running, skipping remaining checks
	I0729 10:28:50.048030    3337 status.go:257] ha-011000 status: &{Name:ha-011000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:28:50.048043    3337 status.go:255] checking status of ha-011000-m02 ...
	I0729 10:28:50.048163    3337 status.go:330] ha-011000-m02 host status = "Stopped" (err=<nil>)
	I0729 10:28:50.048167    3337 status.go:343] host is not running, skipping remaining checks
	I0729 10:28:50.048169    3337 status.go:257] ha-011000-m02 status: &{Name:ha-011000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:28:50.048174    3337 status.go:255] checking status of ha-011000-m03 ...
	I0729 10:28:50.048293    3337 status.go:330] ha-011000-m03 host status = "Stopped" (err=<nil>)
	I0729 10:28:50.048296    3337 status.go:343] host is not running, skipping remaining checks
	I0729 10:28:50.048298    3337 status.go:257] ha-011000-m03 status: &{Name:ha-011000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:28:50.048305    3337 status.go:255] checking status of ha-011000-m04 ...
	I0729 10:28:50.048418    3337 status.go:330] ha-011000-m04 host status = "Stopped" (err=<nil>)
	I0729 10:28:50.048422    3337 status.go:343] host is not running, skipping remaining checks
	I0729 10:28:50.048424    3337 status.go:257] ha-011000-m04 status: &{Name:ha-011000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr": ha-011000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-011000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-011000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-011000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr": ha-011000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-011000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-011000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-011000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr": ha-011000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-011000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-011000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-011000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000: exit status 7 (32.293625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.09s)
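
All three complaints above (ha_test.go:543, :549 and :552) read like exact substring counts over the status text: because the earlier DeleteSecondaryNode step failed, the profile still holds three control planes and four kubelets, so expected counts of two, three and two all miss. A rough sketch of that counting style, assuming `strings.Count` with exact-match comparisons (the real assertions may differ in detail):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated copy of the status output above: three control-plane
	// nodes plus one worker, everything stopped.
	status := strings.Repeat("type: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\n", 3) +
		"type: Worker\nhost: Stopped\nkubelet: Stopped\n"

	fmt.Println(strings.Count(status, "type: Control Plane") == 2) // false: 3 remain
	fmt.Println(strings.Count(status, "kubelet: Stopped") == 3)    // false: 4 remain
	fmt.Println(strings.Count(status, "apiserver: Stopped") == 2)  // false: 3 remain
}
```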

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-011000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-011000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.17933025s)

-- stdout --
	* [ha-011000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-011000" primary control-plane node in "ha-011000" cluster
	* Restarting existing qemu2 VM for "ha-011000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-011000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:28:50.109463    3341 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:28:50.109694    3341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:28:50.109698    3341 out.go:304] Setting ErrFile to fd 2...
	I0729 10:28:50.109700    3341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:28:50.109838    3341 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:28:50.111014    3341 out.go:298] Setting JSON to false
	I0729 10:28:50.127060    3341 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3494,"bootTime":1722270636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:28:50.127134    3341 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:28:50.132302    3341 out.go:177] * [ha-011000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:28:50.139155    3341 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:28:50.139204    3341 notify.go:220] Checking for updates...
	I0729 10:28:50.146248    3341 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:28:50.149164    3341 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:28:50.152221    3341 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:28:50.155213    3341 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:28:50.156561    3341 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:28:50.159535    3341 config.go:182] Loaded profile config "ha-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:28:50.159793    3341 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:28:50.164238    3341 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:28:50.169204    3341 start.go:297] selected driver: qemu2
	I0729 10:28:50.169211    3341 start.go:901] validating driver "qemu2" against &{Name:ha-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-011000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:28:50.169280    3341 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:28:50.171393    3341 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:28:50.171432    3341 cni.go:84] Creating CNI manager for ""
	I0729 10:28:50.171440    3341 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 10:28:50.171491    3341 start.go:340] cluster config:
	{Name:ha-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-011000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:28:50.174831    3341 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:28:50.183212    3341 out.go:177] * Starting "ha-011000" primary control-plane node in "ha-011000" cluster
	I0729 10:28:50.187193    3341 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:28:50.187208    3341 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:28:50.187216    3341 cache.go:56] Caching tarball of preloaded images
	I0729 10:28:50.187273    3341 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:28:50.187279    3341 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:28:50.187358    3341 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/ha-011000/config.json ...
	I0729 10:28:50.187767    3341 start.go:360] acquireMachinesLock for ha-011000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:28:50.187803    3341 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "ha-011000"
	I0729 10:28:50.187813    3341 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:28:50.187819    3341 fix.go:54] fixHost starting: 
	I0729 10:28:50.187942    3341 fix.go:112] recreateIfNeeded on ha-011000: state=Stopped err=<nil>
	W0729 10:28:50.187951    3341 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:28:50.192206    3341 out.go:177] * Restarting existing qemu2 VM for "ha-011000" ...
	I0729 10:28:50.200242    3341 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:28:50.200278    3341 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:01:0f:42:c2:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/disk.qcow2
	I0729 10:28:50.202343    3341 main.go:141] libmachine: STDOUT: 
	I0729 10:28:50.202366    3341 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:28:50.202397    3341 fix.go:56] duration metric: took 14.577667ms for fixHost
	I0729 10:28:50.202402    3341 start.go:83] releasing machines lock for "ha-011000", held for 14.595917ms
	W0729 10:28:50.202410    3341 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:28:50.202451    3341 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:28:50.202456    3341 start.go:729] Will try again in 5 seconds ...
	I0729 10:28:55.204414    3341 start.go:360] acquireMachinesLock for ha-011000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:28:55.204976    3341 start.go:364] duration metric: took 324.167µs to acquireMachinesLock for "ha-011000"
	I0729 10:28:55.205139    3341 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:28:55.205164    3341 fix.go:54] fixHost starting: 
	I0729 10:28:55.205907    3341 fix.go:112] recreateIfNeeded on ha-011000: state=Stopped err=<nil>
	W0729 10:28:55.205936    3341 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:28:55.210528    3341 out.go:177] * Restarting existing qemu2 VM for "ha-011000" ...
	I0729 10:28:55.218369    3341 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:28:55.218623    3341 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:01:0f:42:c2:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/ha-011000/disk.qcow2
	I0729 10:28:55.228306    3341 main.go:141] libmachine: STDOUT: 
	I0729 10:28:55.228366    3341 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:28:55.228473    3341 fix.go:56] duration metric: took 23.313459ms for fixHost
	I0729 10:28:55.228492    3341 start.go:83] releasing machines lock for "ha-011000", held for 23.494583ms
	W0729 10:28:55.228665    3341 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:28:55.235364    3341 out.go:177] 
	W0729 10:28:55.239439    3341 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:28:55.239461    3341 out.go:239] * 
	* 
	W0729 10:28:55.241958    3341 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:28:55.253322    3341 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-011000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000: exit status 7 (70.2265ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
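
The restart dies on the same line as every other qemu2 start in this run: minikube execs qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client (full command in the stderr above), and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A quick probe of the daemon's unix socket, using the paths from the log, shows whether the daemon itself is the problem:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The qemu2 driver hands every VM NIC to socket_vmnet via this
	// socket (path taken from the SocketVMnetPath field in the config
	// quoted above).
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// This is the failure mode seen throughout this run.
		fmt.Println("socket_vmnet daemon unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet daemon is accepting connections")
}
```

If the dial is refused, restarting the daemon on the host is the likely fix (for a Homebrew install, something like `sudo brew services restart socket_vmnet`; an assumption about how it was installed here). The suggested `minikube delete -p ha-011000` cannot help on its own, since the socket lives outside minikube.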

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-011000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-011000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-011000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-011000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000: exit status 7 (28.785958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-011000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-011000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.679ms)

-- stdout --
	* The control-plane node ha-011000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-011000"

-- /stdout --
** stderr ** 
	I0729 10:28:55.439937    3360 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:28:55.440102    3360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:28:55.440105    3360 out.go:304] Setting ErrFile to fd 2...
	I0729 10:28:55.440107    3360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:28:55.440231    3360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:28:55.440495    3360 mustload.go:65] Loading cluster: ha-011000
	I0729 10:28:55.440711    3360 config.go:182] Loaded profile config "ha-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0729 10:28:55.441029    3360 out.go:239] ! The control-plane node ha-011000 host is not running (will try others): state=Stopped
	! The control-plane node ha-011000 host is not running (will try others): state=Stopped
	W0729 10:28:55.441136    3360 out.go:239] ! The control-plane node ha-011000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-011000-m02 host is not running (will try others): state=Stopped
	I0729 10:28:55.445040    3360 out.go:177] * The control-plane node ha-011000-m03 host is not running: state=Stopped
	I0729 10:28:55.449012    3360 out.go:177]   To start a cluster, run: "minikube start -p ha-011000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-011000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-011000 -n ha-011000: exit status 7 (29.13825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (9.98s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-958000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-958000 --driver=qemu2 : exit status 80 (9.912360542s)

-- stdout --
	* [image-958000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-958000" primary control-plane node in "image-958000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-958000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-958000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-958000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-958000 -n image-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-958000 -n image-958000: exit status 7 (67.764125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-958000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.98s)

TestJSONOutput/start/Command (9.81s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-137000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-137000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.809461917s)

-- stdout --
	{"specversion":"1.0","id":"aafd023f-153d-4813-b753-74d09eeda371","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-137000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"17a3f36e-dc3e-45b6-85a0-584781af40a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19345"}}
	{"specversion":"1.0","id":"e525f2b3-1a58-42b2-aa56-ed5c1a70a95f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig"}}
	{"specversion":"1.0","id":"be024cb5-765f-4b4b-83d8-6776d5e80c4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c4c6fb00-6e73-4cd0-b5a9-fcfbd46c66d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"46c0e880-3ee7-4c2a-a124-356d282e4432","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube"}}
	{"specversion":"1.0","id":"1c9bbf75-8e08-4984-838b-5906dc352cbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5fa7363a-6495-4af1-8e34-cbc2cb6d8aa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9332eead-4fde-41d0-84ce-c9104660ca7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"224b2cd1-8300-48aa-ba2e-2a1bd76622fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-137000\" primary control-plane node in \"json-output-137000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4860811d-2ef0-4b0f-b410-c8e722675e6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"428ec8ca-2ee3-4d32-921b-934bcea06451","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-137000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"04cb4e87-fee4-4f3f-97d0-75f96f2c22b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"30b9457e-cc1d-4668-a2ab-391fe2268dc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"fbae713e-ed3f-4acb-889e-96873b5e2555","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-137000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"fbce05a6-197f-4d5a-8a56-99336f54a192","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"cc4e5dd6-898e-4735-8244-9b7742f3d27e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-137000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.81s)
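
The marshalling failure above comes from json_output_test.go decoding stdout line by line as CloudEvents: the bare `OUTPUT:` and `ERROR:` lines injected by socket_vmnet_client are not JSON, and the reported invalid character 'O' is just the first byte of `OUTPUT:`. A small sketch of that per-line decode, with the event reduced to a generic map (the test's real CloudEvent type is richer):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Lines copied from the stdout above: one valid CloudEvent,
	// followed by the non-JSON noise that broke the parse.
	stdout := `{"specversion":"1.0","id":"aafd023f-153d-4813-b753-74d09eeda371","type":"io.k8s.sigs.minikube.step"}
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

	sc := bufio.NewScanner(strings.NewReader(stdout))
	for n := 1; sc.Scan(); n++ {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var ev map[string]any
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			// Prints the same "invalid character 'O'" as the test.
			fmt.Printf("line %d is not a CloudEvent: %v\n", n, err)
			continue
		}
		fmt.Printf("line %d: %v\n", n, ev["type"])
	}
}
```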

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-137000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-137000 --output=json --user=testUser: exit status 83 (76.292333ms)

-- stdout --
	{"specversion":"1.0","id":"1ecf8ddb-aa1b-4123-afee-e82280394189","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-137000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"0e64d3ef-a232-470f-ab68-784b6ac5a401","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-137000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-137000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-137000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-137000 --output=json --user=testUser: exit status 83 (43.198333ms)

-- stdout --
	* The control-plane node json-output-137000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-137000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-137000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-137000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.01s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-913000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-913000 --driver=qemu2 : exit status 80 (9.7165125s)

-- stdout --
	* [first-913000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-913000" primary control-plane node in "first-913000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-913000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-913000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-913000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 10:29:29.423531 -0700 PDT m=+2066.239557918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-915000 -n second-915000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-915000 -n second-915000: exit status 85 (78.063542ms)

                                                
                                                
-- stdout --
	* Profile "second-915000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-915000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-915000" host is not running, skipping log retrieval (state="* Profile \"second-915000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-915000\"")
helpers_test.go:175: Cleaning up "second-915000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-915000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 10:29:29.607836 -0700 PDT m=+2066.423871168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-913000 -n first-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-913000 -n first-913000: exit status 7 (29.263583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-913000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-913000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-913000
--- FAIL: TestMinikubeProfile (10.01s)
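
Note: every provisioning failure in this run reduces to the same root cause visible above: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which dials the unix socket /var/run/socket_vmnet and is refused because no socket_vmnet daemon is listening on the host. A minimal Go sketch (not part of the test suite; the socket path is the SocketVMnetPath shown in the config dumps below) that reproduces the probe:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the same unix socket socket_vmnet_client uses; with no daemon
		// listening, this fails with "connect: connection refused", matching
		// the ERROR lines in the stdout above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}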

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.85s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-302000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-302000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.77913875s)

                                                
                                                
-- stdout --
	* [mount-start-1-302000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-302000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-302000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-302000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-302000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-302000 -n mount-start-1-302000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-302000 -n mount-start-1-302000: exit status 7 (69.213583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-302000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.85s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-937000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-937000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.767949917s)

                                                
                                                
-- stdout --
	* [multinode-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-937000" primary control-plane node in "multinode-937000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:29:39.779688    3513 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:29:39.779823    3513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:29:39.779826    3513 out.go:304] Setting ErrFile to fd 2...
	I0729 10:29:39.779829    3513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:29:39.779991    3513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:29:39.781033    3513 out.go:298] Setting JSON to false
	I0729 10:29:39.796978    3513 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3543,"bootTime":1722270636,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:29:39.797075    3513 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:29:39.803965    3513 out.go:177] * [multinode-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:29:39.811848    3513 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:29:39.811881    3513 notify.go:220] Checking for updates...
	I0729 10:29:39.818959    3513 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:29:39.820472    3513 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:29:39.822989    3513 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:29:39.825981    3513 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:29:39.828992    3513 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:29:39.832143    3513 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:29:39.836951    3513 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:29:39.843912    3513 start.go:297] selected driver: qemu2
	I0729 10:29:39.843921    3513 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:29:39.843929    3513 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:29:39.846324    3513 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:29:39.848993    3513 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:29:39.852038    3513 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:29:39.852074    3513 cni.go:84] Creating CNI manager for ""
	I0729 10:29:39.852081    3513 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 10:29:39.852085    3513 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 10:29:39.852119    3513 start.go:340] cluster config:
	{Name:multinode-937000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:29:39.855918    3513 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:29:39.863997    3513 out.go:177] * Starting "multinode-937000" primary control-plane node in "multinode-937000" cluster
	I0729 10:29:39.866835    3513 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:29:39.866848    3513 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:29:39.866857    3513 cache.go:56] Caching tarball of preloaded images
	I0729 10:29:39.866910    3513 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:29:39.866916    3513 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:29:39.867136    3513 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/multinode-937000/config.json ...
	I0729 10:29:39.867147    3513 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/multinode-937000/config.json: {Name:mke165a14b4f960a4bf25134ce05ea8de97ffa80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:29:39.867562    3513 start.go:360] acquireMachinesLock for multinode-937000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:29:39.867601    3513 start.go:364] duration metric: took 31.125µs to acquireMachinesLock for "multinode-937000"
	I0729 10:29:39.867613    3513 start.go:93] Provisioning new machine with config: &{Name:multinode-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:29:39.867648    3513 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:29:39.874912    3513 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:29:39.892067    3513 start.go:159] libmachine.API.Create for "multinode-937000" (driver="qemu2")
	I0729 10:29:39.892105    3513 client.go:168] LocalClient.Create starting
	I0729 10:29:39.892175    3513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:29:39.892206    3513 main.go:141] libmachine: Decoding PEM data...
	I0729 10:29:39.892216    3513 main.go:141] libmachine: Parsing certificate...
	I0729 10:29:39.892264    3513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:29:39.892287    3513 main.go:141] libmachine: Decoding PEM data...
	I0729 10:29:39.892297    3513 main.go:141] libmachine: Parsing certificate...
	I0729 10:29:39.892658    3513 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:29:40.046473    3513 main.go:141] libmachine: Creating SSH key...
	I0729 10:29:40.088062    3513 main.go:141] libmachine: Creating Disk image...
	I0729 10:29:40.088067    3513 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:29:40.088237    3513 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2
	I0729 10:29:40.097287    3513 main.go:141] libmachine: STDOUT: 
	I0729 10:29:40.097304    3513 main.go:141] libmachine: STDERR: 
	I0729 10:29:40.097352    3513 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2 +20000M
	I0729 10:29:40.105039    3513 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:29:40.105053    3513 main.go:141] libmachine: STDERR: 
	I0729 10:29:40.105064    3513 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2
	I0729 10:29:40.105068    3513 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:29:40.105081    3513 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:29:40.105104    3513 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:e5:dc:f8:86:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2
	I0729 10:29:40.106703    3513 main.go:141] libmachine: STDOUT: 
	I0729 10:29:40.106719    3513 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:29:40.106738    3513 client.go:171] duration metric: took 214.634208ms to LocalClient.Create
	I0729 10:29:42.108826    3513 start.go:128] duration metric: took 2.241264833s to createHost
	I0729 10:29:42.108894    3513 start.go:83] releasing machines lock for "multinode-937000", held for 2.241389625s
	W0729 10:29:42.108935    3513 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:29:42.123352    3513 out.go:177] * Deleting "multinode-937000" in qemu2 ...
	W0729 10:29:42.153208    3513 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:29:42.153240    3513 start.go:729] Will try again in 5 seconds ...
	I0729 10:29:47.155252    3513 start.go:360] acquireMachinesLock for multinode-937000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:29:47.155776    3513 start.go:364] duration metric: took 346.959µs to acquireMachinesLock for "multinode-937000"
	I0729 10:29:47.155933    3513 start.go:93] Provisioning new machine with config: &{Name:multinode-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:29:47.156262    3513 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:29:47.169813    3513 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:29:47.218819    3513 start.go:159] libmachine.API.Create for "multinode-937000" (driver="qemu2")
	I0729 10:29:47.218876    3513 client.go:168] LocalClient.Create starting
	I0729 10:29:47.218995    3513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:29:47.219054    3513 main.go:141] libmachine: Decoding PEM data...
	I0729 10:29:47.219072    3513 main.go:141] libmachine: Parsing certificate...
	I0729 10:29:47.219140    3513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:29:47.219185    3513 main.go:141] libmachine: Decoding PEM data...
	I0729 10:29:47.219201    3513 main.go:141] libmachine: Parsing certificate...
	I0729 10:29:47.219764    3513 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:29:47.381861    3513 main.go:141] libmachine: Creating SSH key...
	I0729 10:29:47.450554    3513 main.go:141] libmachine: Creating Disk image...
	I0729 10:29:47.450559    3513 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:29:47.450720    3513 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2
	I0729 10:29:47.459796    3513 main.go:141] libmachine: STDOUT: 
	I0729 10:29:47.459811    3513 main.go:141] libmachine: STDERR: 
	I0729 10:29:47.459859    3513 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2 +20000M
	I0729 10:29:47.467596    3513 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:29:47.467608    3513 main.go:141] libmachine: STDERR: 
	I0729 10:29:47.467618    3513 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2
	I0729 10:29:47.467624    3513 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:29:47.467634    3513 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:29:47.467666    3513 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:59:64:a8:e5:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2
	I0729 10:29:47.469223    3513 main.go:141] libmachine: STDOUT: 
	I0729 10:29:47.469236    3513 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:29:47.469247    3513 client.go:171] duration metric: took 250.378666ms to LocalClient.Create
	I0729 10:29:49.471333    3513 start.go:128] duration metric: took 2.315153667s to createHost
	I0729 10:29:49.471481    3513 start.go:83] releasing machines lock for "multinode-937000", held for 2.315712584s
	W0729 10:29:49.471861    3513 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:29:49.481589    3513 out.go:177] 
	W0729 10:29:49.492726    3513 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:29:49.492764    3513 out.go:239] * 
	* 
	W0729 10:29:49.495391    3513 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:29:49.504562    3513 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-937000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (66.143875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.84s)
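
The -v=8 trace above records the exact provisioning sequence libmachine runs before failing: qemu-img convert (raw to qcow2), qemu-img resize +20000M, then qemu-system-aarch64 wrapped in socket_vmnet_client. A rough Go sketch of that sequence, with the profile-specific paths replaced by an illustrative disk name and the long QEMU flag list elided:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run echoes and executes one command, mirroring the "executing:" lines
	// in the trace above.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("executing: %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		disk := "disk.qcow2" // illustrative; the real file lives under .minikube/machines/<profile>/
		_ = run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", disk+".raw", disk)
		_ = run("qemu-img", "resize", disk, "+20000M")
		// Starting the VM is the step that fails: the wrapper cannot connect
		// to /var/run/socket_vmnet, so QEMU never launches.
		if err := run("/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet", "qemu-system-aarch64" /* flags elided */); err != nil {
			fmt.Println("StartHost failed:", err)
		}
	}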

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (76.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (127.521416ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-937000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- rollout status deployment/busybox: exit status 1 (58.069708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.123792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.514583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.873875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.705167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.322083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.56225ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.30575ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.171667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.627667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0729 10:31:03.483214    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.81975ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.268708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.02425ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.863875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.881375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (29.128792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (76.95s)
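
The block above is a fixed-budget poll: multinode_test.go re-runs the same kubectl query, logging "may be temporary" after each miss, then gives up. The shape of that loop as a self-contained sketch (the attempt count and delay here are illustrative, not the test's actual values):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// pollUntil retries probe up to attempts times, sleeping delay between
	// tries, and returns the last error if no attempt succeeds.
	func pollUntil(attempts int, delay time.Duration, probe func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = probe(); err == nil {
				return nil
			}
			fmt.Println("failed to retrieve Pod IPs (may be temporary):", err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		// With no cluster behind the context, every probe returns the same error.
		err := pollUntil(3, 100*time.Millisecond, func() error {
			return errors.New(`no server found for cluster "multinode-937000"`)
		})
		fmt.Println("gave up:", err)
	}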

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-937000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.23425ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (29.71775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-937000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-937000 -v 3 --alsologtostderr: exit status 83 (42.485792ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-937000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-937000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:31:06.647027    3883 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:06.647207    3883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:06.647210    3883 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:06.647212    3883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:06.647351    3883 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:06.647598    3883 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:06.647781    3883 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:06.652740    3883 out.go:177] * The control-plane node multinode-937000 host is not running: state=Stopped
	I0729 10:31:06.656666    3883 out.go:177]   To start a cluster, run: "minikube start -p multinode-937000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-937000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (30.256625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-937000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-937000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.551167ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-937000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-937000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-937000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (29.5885ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
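
Two distinct errors stack here: kubectl fails because the kubeconfig has no multinode-937000 context, so the test then feeds an empty stdout to its JSON decoder, which is exactly what produces "unexpected end of JSON input". A small demonstration:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		// kubectl wrote nothing to stdout (only the context error to stderr),
		// so the test decoded zero bytes:
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}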

                                                
                                    
TestMultiNode/serial/ProfileList (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-937000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-937000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-937000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-937000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (29.380458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
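
The assertion text is dense, but the relevant part of the profile JSON is just Config.Nodes: the stored config still holds the single placeholder node from the failed fresh start, while the test expects three (the two requested nodes plus the one AddNode would have added). A trimmed decode, with the struct mirroring only the fields the node count needs:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct{ Name string }
			}
		}
	}

	func main() {
		// Pared down from the `profile list --output json` dump above.
		raw := []byte(`{"valid":[{"Name":"multinode-937000","Config":{"Nodes":[{"Name":""}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		fmt.Println(len(pl.Valid[0].Config.Nodes)) // 1, not the 3 the test wants
	}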

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status --output json --alsologtostderr: exit status 7 (28.877334ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-937000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0729 10:31:06.852534    3895 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:06.852706    3895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:06.852709    3895 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:06.852711    3895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:06.852858    3895 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:06.852970    3895 out.go:298] Setting JSON to true
	I0729 10:31:06.852985    3895 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:06.853038    3895 notify.go:220] Checking for updates...
	I0729 10:31:06.853188    3895 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:06.853196    3895 status.go:255] checking status of multinode-937000 ...
	I0729 10:31:06.853404    3895 status.go:330] multinode-937000 host status = "Stopped" (err=<nil>)
	I0729 10:31:06.853408    3895 status.go:343] host is not running, skipping remaining checks
	I0729 10:31:06.853411    3895 status.go:257] multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-937000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
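
The decode failure on the line above is plain encoding/json behavior: with only the single stopped host reporting, `status --output json` appears to have emitted one JSON object, while the test decodes the output into a slice ([]cmd.Status). A minimal sketch of the mismatch, using a hypothetical Status shape copied from the logged fields rather than the real cmd.Status type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status is a stand-in built from the fields visible in the logged stdout.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        raw := []byte(`{"Name":"multinode-937000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

        var many []Status
        fmt.Println(json.Unmarshal(raw, &many)) // json: cannot unmarshal object into Go value of type []main.Status

        var one Status
        fmt.Println(json.Unmarshal(raw, &one)) // <nil>: a single object decodes cleanly
    }
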
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (28.449042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 node stop m03: exit status 85 (46.0925ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-937000 node stop m03": exit status 85
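
Exit status 85 (GUEST_NODE_RETRIEVE) is consistent with the profile config quoted near the top of this section: its Nodes array holds exactly one control-plane entry, so there is no m03 to stop. A hypothetical sketch of such a lookup (not minikube's actual implementation):

    package main

    import "fmt"

    // Node stands in for the entries of the Nodes list in the logged config.
    type Node struct {
        Name         string
        ControlPlane bool
        Worker       bool
    }

    func retrieve(nodes []Node, name string) (Node, error) {
        for _, n := range nodes {
            if n.Name == name {
                return n, nil
            }
        }
        return Node{}, fmt.Errorf("retrieving node: Could not find node %s", name)
    }

    func main() {
        // The logged profile has a single unnamed control-plane node.
        nodes := []Node{{Name: "", ControlPlane: true, Worker: true}}
        if _, err := retrieve(nodes, "m03"); err != nil {
            fmt.Println(err) // retrieving node: Could not find node m03
        }
    }
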
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status: exit status 7 (29.69525ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status --alsologtostderr: exit status 7 (29.397875ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:31:06.986920    3903 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:06.987057    3903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:06.987060    3903 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:06.987063    3903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:06.987223    3903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:06.987333    3903 out.go:298] Setting JSON to false
	I0729 10:31:06.987343    3903 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:06.987399    3903 notify.go:220] Checking for updates...
	I0729 10:31:06.987546    3903 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:06.987554    3903 status.go:255] checking status of multinode-937000 ...
	I0729 10:31:06.987751    3903 status.go:330] multinode-937000 host status = "Stopped" (err=<nil>)
	I0729 10:31:06.987754    3903 status.go:343] host is not running, skipping remaining checks
	I0729 10:31:06.987756    3903 status.go:257] multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-937000 status --alsologtostderr": multinode-937000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (29.566792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (42.65s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.968958ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0729 10:31:07.046291    3907 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:07.046529    3907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:07.046534    3907 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:07.046536    3907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:07.046666    3907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:07.046885    3907 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:07.047077    3907 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:07.051554    3907 out.go:177] 
	W0729 10:31:07.055665    3907 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0729 10:31:07.055670    3907 out.go:239] * 
	* 
	W0729 10:31:07.057293    3907 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:31:07.060688    3907 out.go:177] 

** /stderr **
multinode_test.go:284: I0729 10:31:07.046291    3907 out.go:291] Setting OutFile to fd 1 ...
I0729 10:31:07.046529    3907 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:31:07.046534    3907 out.go:304] Setting ErrFile to fd 2...
I0729 10:31:07.046536    3907 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:31:07.046666    3907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
I0729 10:31:07.046885    3907 mustload.go:65] Loading cluster: multinode-937000
I0729 10:31:07.047077    3907 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:31:07.051554    3907 out.go:177] 
W0729 10:31:07.055665    3907 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0729 10:31:07.055670    3907 out.go:239] * 
* 
W0729 10:31:07.057293    3907 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 10:31:07.060688    3907 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-937000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr: exit status 7 (29.503084ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:31:07.093461    3909 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:07.093595    3909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:07.093598    3909 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:07.093601    3909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:07.093726    3909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:07.093831    3909 out.go:298] Setting JSON to false
	I0729 10:31:07.093841    3909 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:07.093907    3909 notify.go:220] Checking for updates...
	I0729 10:31:07.094062    3909 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:07.094069    3909 status.go:255] checking status of multinode-937000 ...
	I0729 10:31:07.094275    3909 status.go:330] multinode-937000 host status = "Stopped" (err=<nil>)
	I0729 10:31:07.094278    3909 status.go:343] host is not running, skipping remaining checks
	I0729 10:31:07.094280    3909 status.go:257] multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr: exit status 7 (72.103417ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:31:08.618580    3911 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:08.618814    3911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:08.618818    3911 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:08.618822    3911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:08.618985    3911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:08.619143    3911 out.go:298] Setting JSON to false
	I0729 10:31:08.619160    3911 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:08.619208    3911 notify.go:220] Checking for updates...
	I0729 10:31:08.619447    3911 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:08.619456    3911 status.go:255] checking status of multinode-937000 ...
	I0729 10:31:08.619765    3911 status.go:330] multinode-937000 host status = "Stopped" (err=<nil>)
	I0729 10:31:08.619770    3911 status.go:343] host is not running, skipping remaining checks
	I0729 10:31:08.619773    3911 status.go:257] multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr: exit status 7 (72.272ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:31:09.772519    3913 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:09.772719    3913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:09.772724    3913 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:09.772727    3913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:09.772897    3913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:09.773050    3913 out.go:298] Setting JSON to false
	I0729 10:31:09.773062    3913 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:09.773104    3913 notify.go:220] Checking for updates...
	I0729 10:31:09.773313    3913 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:09.773321    3913 status.go:255] checking status of multinode-937000 ...
	I0729 10:31:09.773582    3913 status.go:330] multinode-937000 host status = "Stopped" (err=<nil>)
	I0729 10:31:09.773587    3913 status.go:343] host is not running, skipping remaining checks
	I0729 10:31:09.773590    3913 status.go:257] multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr: exit status 7 (72.039542ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:31:11.565206    3919 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:11.565407    3919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:11.565412    3919 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:11.565415    3919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:11.565607    3919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:11.565795    3919 out.go:298] Setting JSON to false
	I0729 10:31:11.565807    3919 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:11.565847    3919 notify.go:220] Checking for updates...
	I0729 10:31:11.566053    3919 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:11.566062    3919 status.go:255] checking status of multinode-937000 ...
	I0729 10:31:11.566327    3919 status.go:330] multinode-937000 host status = "Stopped" (err=<nil>)
	I0729 10:31:11.566332    3919 status.go:343] host is not running, skipping remaining checks
	I0729 10:31:11.566335    3919 status.go:257] multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr: exit status 7 (71.556333ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:31:15.949019    3921 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:15.949257    3921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:15.949261    3921 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:15.949264    3921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:15.949439    3921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:15.949595    3921 out.go:298] Setting JSON to false
	I0729 10:31:15.949613    3921 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:15.949652    3921 notify.go:220] Checking for updates...
	I0729 10:31:15.949841    3921 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:15.949849    3921 status.go:255] checking status of multinode-937000 ...
	I0729 10:31:15.950114    3921 status.go:330] multinode-937000 host status = "Stopped" (err=<nil>)
	I0729 10:31:15.950119    3921 status.go:343] host is not running, skipping remaining checks
	I0729 10:31:15.950122    3921 status.go:257] multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr: exit status 7 (74.019ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:31:18.920674    3923 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:18.920890    3923 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:18.920895    3923 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:18.920898    3923 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:18.921122    3923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:18.921286    3923 out.go:298] Setting JSON to false
	I0729 10:31:18.921304    3923 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:18.921344    3923 notify.go:220] Checking for updates...
	I0729 10:31:18.921586    3923 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:18.921595    3923 status.go:255] checking status of multinode-937000 ...
	I0729 10:31:18.921912    3923 status.go:330] multinode-937000 host status = "Stopped" (err=<nil>)
	I0729 10:31:18.921917    3923 status.go:343] host is not running, skipping remaining checks
	I0729 10:31:18.921920    3923 status.go:257] multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr: exit status 7 (73.782125ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:31:26.544568    3925 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:26.544761    3925 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:26.544766    3925 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:26.544769    3925 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:26.544954    3925 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:26.545110    3925 out.go:298] Setting JSON to false
	I0729 10:31:26.545122    3925 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:26.545164    3925 notify.go:220] Checking for updates...
	I0729 10:31:26.545373    3925 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:26.545385    3925 status.go:255] checking status of multinode-937000 ...
	I0729 10:31:26.545643    3925 status.go:330] multinode-937000 host status = "Stopped" (err=<nil>)
	I0729 10:31:26.545647    3925 status.go:343] host is not running, skipping remaining checks
	I0729 10:31:26.545650    3925 status.go:257] multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr: exit status 7 (76.908541ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:31:33.095843    3929 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:33.096063    3929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:33.096068    3929 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:33.096071    3929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:33.096298    3929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:33.096478    3929 out.go:298] Setting JSON to false
	I0729 10:31:33.096493    3929 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:33.096542    3929 notify.go:220] Checking for updates...
	I0729 10:31:33.096805    3929 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:33.096816    3929 status.go:255] checking status of multinode-937000 ...
	I0729 10:31:33.097142    3929 status.go:330] multinode-937000 host status = "Stopped" (err=<nil>)
	I0729 10:31:33.097147    3929 status.go:343] host is not running, skipping remaining checks
	I0729 10:31:33.097151    3929 status.go:257] multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr: exit status 7 (72.122375ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:31:49.627072    3939 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:49.627268    3939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:49.627273    3939 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:49.627277    3939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:49.627469    3939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:49.627655    3939 out.go:298] Setting JSON to false
	I0729 10:31:49.627668    3939 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:49.627714    3939 notify.go:220] Checking for updates...
	I0729 10:31:49.627951    3939 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:49.627961    3939 status.go:255] checking status of multinode-937000 ...
	I0729 10:31:49.628272    3939 status.go:330] multinode-937000 host status = "Stopped" (err=<nil>)
	I0729 10:31:49.628277    3939 status.go:343] host is not running, skipping remaining checks
	I0729 10:31:49.628281    3939 status.go:257] multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-937000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (33.777291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (42.65s)
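
Each individual status call above fails in well under 100ms; the 42.65s total comes from the harness re-running the check with a growing delay between attempts (10:31:07, :08, :09, :11, :15, :18, :26, :33, :49) before giving up. A hypothetical poll-with-backoff sketch of that shape, not the actual test helper:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pollUntil retries check with a doubling delay until it succeeds or the
    // deadline passes, roughly matching the timestamp gaps in the log.
    func pollUntil(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        delay := time.Second
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return err
            }
            time.Sleep(delay)
            delay *= 2
        }
    }

    func main() {
        err := pollUntil(5*time.Second, func() error {
            return errors.New("host: Stopped") // stand-in for the failing status call
        })
        fmt.Println(err)
    }
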

TestMultiNode/serial/RestartKeepsNodes (8.8s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-937000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-937000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-937000: (3.4582485s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-937000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-937000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.213098583s)

-- stdout --
	* [multinode-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-937000" primary control-plane node in "multinode-937000" cluster
	* Restarting existing qemu2 VM for "multinode-937000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-937000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:31:53.209261    3963 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:53.209439    3963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:53.209443    3963 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:53.209446    3963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:53.209629    3963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:53.210809    3963 out.go:298] Setting JSON to false
	I0729 10:31:53.229692    3963 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3677,"bootTime":1722270636,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:31:53.229768    3963 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:31:53.233773    3963 out.go:177] * [multinode-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:31:53.240813    3963 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:31:53.240859    3963 notify.go:220] Checking for updates...
	I0729 10:31:53.247750    3963 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:31:53.250777    3963 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:31:53.253701    3963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:31:53.256733    3963 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:31:53.259861    3963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:31:53.261494    3963 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:53.261557    3963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:31:53.265696    3963 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:31:53.272593    3963 start.go:297] selected driver: qemu2
	I0729 10:31:53.272601    3963 start.go:901] validating driver "qemu2" against &{Name:multinode-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:31:53.272665    3963 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:31:53.275004    3963 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:31:53.275050    3963 cni.go:84] Creating CNI manager for ""
	I0729 10:31:53.275055    3963 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 10:31:53.275104    3963 start.go:340] cluster config:
	{Name:multinode-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:31:53.278801    3963 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:31:53.285722    3963 out.go:177] * Starting "multinode-937000" primary control-plane node in "multinode-937000" cluster
	I0729 10:31:53.289687    3963 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:31:53.289704    3963 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:31:53.289717    3963 cache.go:56] Caching tarball of preloaded images
	I0729 10:31:53.289784    3963 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:31:53.289794    3963 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:31:53.289850    3963 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/multinode-937000/config.json ...
	I0729 10:31:53.290261    3963 start.go:360] acquireMachinesLock for multinode-937000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:31:53.290295    3963 start.go:364] duration metric: took 28.833µs to acquireMachinesLock for "multinode-937000"
	I0729 10:31:53.290305    3963 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:31:53.290311    3963 fix.go:54] fixHost starting: 
	I0729 10:31:53.290435    3963 fix.go:112] recreateIfNeeded on multinode-937000: state=Stopped err=<nil>
	W0729 10:31:53.290445    3963 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:31:53.298743    3963 out.go:177] * Restarting existing qemu2 VM for "multinode-937000" ...
	I0729 10:31:53.302708    3963 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:31:53.302749    3963 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:59:64:a8:e5:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2
	I0729 10:31:53.304746    3963 main.go:141] libmachine: STDOUT: 
	I0729 10:31:53.304764    3963 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:31:53.304792    3963 fix.go:56] duration metric: took 14.481166ms for fixHost
	I0729 10:31:53.304796    3963 start.go:83] releasing machines lock for "multinode-937000", held for 14.496667ms
	W0729 10:31:53.304803    3963 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:31:53.304838    3963 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:31:53.304842    3963 start.go:729] Will try again in 5 seconds ...
	I0729 10:31:58.306787    3963 start.go:360] acquireMachinesLock for multinode-937000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:31:58.307241    3963 start.go:364] duration metric: took 349.709µs to acquireMachinesLock for "multinode-937000"
	I0729 10:31:58.307379    3963 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:31:58.307405    3963 fix.go:54] fixHost starting: 
	I0729 10:31:58.308204    3963 fix.go:112] recreateIfNeeded on multinode-937000: state=Stopped err=<nil>
	W0729 10:31:58.308230    3963 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:31:58.315597    3963 out.go:177] * Restarting existing qemu2 VM for "multinode-937000" ...
	I0729 10:31:58.319682    3963 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:31:58.319923    3963 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:59:64:a8:e5:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2
	I0729 10:31:58.329230    3963 main.go:141] libmachine: STDOUT: 
	I0729 10:31:58.329298    3963 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:31:58.329379    3963 fix.go:56] duration metric: took 21.979833ms for fixHost
	I0729 10:31:58.329393    3963 start.go:83] releasing machines lock for "multinode-937000", held for 22.130958ms
	W0729 10:31:58.329612    3963 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:31:58.337605    3963 out.go:177] 
	W0729 10:31:58.340650    3963 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:31:58.340673    3963 out.go:239] * 
	* 
	W0729 10:31:58.343450    3963 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:31:58.351626    3963 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-937000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-937000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (32.300959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.80s)
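
Both restart attempts above die at the same precondition: socket_vmnet_client cannot reach the socket_vmnet daemon, so qemu never gets its network socket. A minimal probe of that condition, reusing the SocketVMnetPath value from the logged config (an illustration, not part of the test suite):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Path taken from SocketVMnetPath in the cluster config above.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            // With no daemon listening this prints, e.g.:
            // dial unix /var/run/socket_vmnet: connect: connection refused
            fmt.Println(err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }
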

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 node delete m03: exit status 83 (40.060583ms)

-- stdout --
	* The control-plane node multinode-937000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-937000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-937000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status --alsologtostderr: exit status 7 (28.808042ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:31:58.534998    3982 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:31:58.535134    3982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:58.535138    3982 out.go:304] Setting ErrFile to fd 2...
	I0729 10:31:58.535140    3982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:31:58.535274    3982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:31:58.535387    3982 out.go:298] Setting JSON to false
	I0729 10:31:58.535397    3982 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:31:58.535466    3982 notify.go:220] Checking for updates...
	I0729 10:31:58.535575    3982 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:31:58.535584    3982 status.go:255] checking status of multinode-937000 ...
	I0729 10:31:58.535797    3982 status.go:330] multinode-937000 host status = "Stopped" (err=<nil>)
	I0729 10:31:58.535801    3982 status.go:343] host is not running, skipping remaining checks
	I0729 10:31:58.535803    3982 status.go:257] multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-937000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (29.914625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
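The bare "Stopped" printed by the post-mortem helper comes from --format={{.Host}}, a Go text/template applied to the status value logged above ("multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped ...}"). An illustrative sketch of that rendering, with the struct fields copied from the logged value rather than from minikube source:

	// status_format.go - illustrative only; fields mirror the logged status struct.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := Status{Name: "multinode-937000", Host: "Stopped",
			Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		// Same template string the harness passes via --format={{.Host}}.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		if err := tmpl.Execute(os.Stdout, st); err != nil { // prints: Stopped
			panic(err)
		}
	}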

TestMultiNode/serial/StopMultiNode (2.13s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-937000 stop: (2.008562042s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status: exit status 7 (63.944ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-937000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-937000 status --alsologtostderr: exit status 7 (31.966458ms)

-- stdout --
	multinode-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:32:00.669857    4000 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:32:00.670005    4000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:32:00.670008    4000 out.go:304] Setting ErrFile to fd 2...
	I0729 10:32:00.670010    4000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:32:00.670138    4000 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:32:00.670255    4000 out.go:298] Setting JSON to false
	I0729 10:32:00.670265    4000 mustload.go:65] Loading cluster: multinode-937000
	I0729 10:32:00.670328    4000 notify.go:220] Checking for updates...
	I0729 10:32:00.670441    4000 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:32:00.670452    4000 status.go:255] checking status of multinode-937000 ...
	I0729 10:32:00.670677    4000 status.go:330] multinode-937000 host status = "Stopped" (err=<nil>)
	I0729 10:32:00.670681    4000 status.go:343] host is not running, skipping remaining checks
	I0729 10:32:00.670683    4000 status.go:257] multinode-937000 status: &{Name:multinode-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-937000 status --alsologtostderr": multinode-937000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-937000 status --alsologtostderr": multinode-937000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (28.979542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.13s)
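The two "incorrect number of ..." failures above are counting assertions: after "minikube stop", a two-node profile should report one "host: Stopped" and one "kubelet: Stopped" line per node, but since the second node was never created only one of each appears. A hypothetical reduction of that check follows (the real assertion lives in multinode_test.go and may differ in detail):

	// stopped_count.go - hypothetical reduction of the assertion, not the
	// verbatim test code from multinode_test.go.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output captured in the -- stdout -- block above.
		out := "multinode-937000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
		const wantNodes = 2 // the profile was meant to run two nodes
		if got := strings.Count(out, "host: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
		}
		if got := strings.Count(out, "kubelet: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
		}
	}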

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-937000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-937000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.177595333s)

-- stdout --
	* [multinode-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-937000" primary control-plane node in "multinode-937000" cluster
	* Restarting existing qemu2 VM for "multinode-937000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-937000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:32:00.727547    4004 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:32:00.727673    4004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:32:00.727676    4004 out.go:304] Setting ErrFile to fd 2...
	I0729 10:32:00.727679    4004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:32:00.727805    4004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:32:00.728900    4004 out.go:298] Setting JSON to false
	I0729 10:32:00.745078    4004 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3684,"bootTime":1722270636,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:32:00.745152    4004 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:32:00.750379    4004 out.go:177] * [multinode-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:32:00.757396    4004 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:32:00.757425    4004 notify.go:220] Checking for updates...
	I0729 10:32:00.765366    4004 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:32:00.768340    4004 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:32:00.772017    4004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:32:00.775429    4004 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:32:00.776785    4004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:32:00.779605    4004 config.go:182] Loaded profile config "multinode-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:32:00.779856    4004 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:32:00.784303    4004 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:32:00.789380    4004 start.go:297] selected driver: qemu2
	I0729 10:32:00.789389    4004 start.go:901] validating driver "qemu2" against &{Name:multinode-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:32:00.789459    4004 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:32:00.791706    4004 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:32:00.791728    4004 cni.go:84] Creating CNI manager for ""
	I0729 10:32:00.791731    4004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 10:32:00.791783    4004 start.go:340] cluster config:
	{Name:multinode-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:32:00.795067    4004 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:32:00.802291    4004 out.go:177] * Starting "multinode-937000" primary control-plane node in "multinode-937000" cluster
	I0729 10:32:00.806294    4004 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:32:00.806309    4004 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:32:00.806321    4004 cache.go:56] Caching tarball of preloaded images
	I0729 10:32:00.806370    4004 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:32:00.806375    4004 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:32:00.806436    4004 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/multinode-937000/config.json ...
	I0729 10:32:00.806821    4004 start.go:360] acquireMachinesLock for multinode-937000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:32:00.806846    4004 start.go:364] duration metric: took 19.75µs to acquireMachinesLock for "multinode-937000"
	I0729 10:32:00.806855    4004 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:32:00.806861    4004 fix.go:54] fixHost starting: 
	I0729 10:32:00.806969    4004 fix.go:112] recreateIfNeeded on multinode-937000: state=Stopped err=<nil>
	W0729 10:32:00.806977    4004 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:32:00.811232    4004 out.go:177] * Restarting existing qemu2 VM for "multinode-937000" ...
	I0729 10:32:00.819322    4004 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:32:00.819359    4004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:59:64:a8:e5:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2
	I0729 10:32:00.821274    4004 main.go:141] libmachine: STDOUT: 
	I0729 10:32:00.821292    4004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:32:00.821320    4004 fix.go:56] duration metric: took 14.459916ms for fixHost
	I0729 10:32:00.821324    4004 start.go:83] releasing machines lock for "multinode-937000", held for 14.474292ms
	W0729 10:32:00.821329    4004 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:32:00.821369    4004 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:32:00.821374    4004 start.go:729] Will try again in 5 seconds ...
	I0729 10:32:05.823357    4004 start.go:360] acquireMachinesLock for multinode-937000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:32:05.823736    4004 start.go:364] duration metric: took 300.667µs to acquireMachinesLock for "multinode-937000"
	I0729 10:32:05.823856    4004 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:32:05.823875    4004 fix.go:54] fixHost starting: 
	I0729 10:32:05.824557    4004 fix.go:112] recreateIfNeeded on multinode-937000: state=Stopped err=<nil>
	W0729 10:32:05.824583    4004 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:32:05.828995    4004 out.go:177] * Restarting existing qemu2 VM for "multinode-937000" ...
	I0729 10:32:05.832945    4004 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:32:05.833179    4004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:59:64:a8:e5:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/multinode-937000/disk.qcow2
	I0729 10:32:05.841928    4004 main.go:141] libmachine: STDOUT: 
	I0729 10:32:05.841979    4004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:32:05.842052    4004 fix.go:56] duration metric: took 18.179917ms for fixHost
	I0729 10:32:05.842066    4004 start.go:83] releasing machines lock for "multinode-937000", held for 18.308541ms
	W0729 10:32:05.842194    4004 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:32:05.848900    4004 out.go:177] 
	W0729 10:32:05.853075    4004 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:32:05.853106    4004 out.go:239] * 
	* 
	W0729 10:32:05.855522    4004 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:32:05.864926    4004 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-937000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (67.845375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (20.11s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-937000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-937000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-937000-m01 --driver=qemu2 : exit status 80 (9.827301833s)

-- stdout --
	* [multinode-937000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-937000-m01" primary control-plane node in "multinode-937000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-937000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-937000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-937000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-937000-m02 --driver=qemu2 : exit status 80 (10.05914975s)

-- stdout --
	* [multinode-937000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-937000-m02" primary control-plane node in "multinode-937000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-937000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-937000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-937000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-937000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-937000: exit status 83 (79.213208ms)

-- stdout --
	* The control-plane node multinode-937000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-937000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-937000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-937000 -n multinode-937000: exit status 7 (29.429917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.11s)
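The profile names this test provisions, multinode-937000-m01 and multinode-937000-m02, are deliberately chosen to collide with the <profile>-mNN node-naming convention visible earlier in the log (for example the "node delete m03" call). A toy illustration of that convention; the format string is inferred from the log, not taken from minikube source:

	// node_names.go - toy illustration; the naming scheme is inferred from
	// the log (multinode-937000-m02, -m03), not from minikube source.
	package main

	import "fmt"

	func main() {
		profile := "multinode-937000"
		for i := 2; i <= 3; i++ {
			// Secondary nodes are numbered from m02 upward.
			fmt.Printf("%s-m%02d\n", profile, i) // multinode-937000-m02, multinode-937000-m03
		}
	}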

TestPreload (10.17s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-886000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-886000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.022923833s)

-- stdout --
	* [test-preload-886000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-886000" primary control-plane node in "test-preload-886000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-886000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:32:26.195145    4058 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:32:26.195273    4058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:32:26.195277    4058 out.go:304] Setting ErrFile to fd 2...
	I0729 10:32:26.195280    4058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:32:26.195410    4058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:32:26.196450    4058 out.go:298] Setting JSON to false
	I0729 10:32:26.212318    4058 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3710,"bootTime":1722270636,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:32:26.212375    4058 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:32:26.219022    4058 out.go:177] * [test-preload-886000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:32:26.227021    4058 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:32:26.227087    4058 notify.go:220] Checking for updates...
	I0729 10:32:26.234972    4058 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:32:26.238031    4058 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:32:26.241000    4058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:32:26.243911    4058 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:32:26.246975    4058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:32:26.250233    4058 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:32:26.250300    4058 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:32:26.254888    4058 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:32:26.261938    4058 start.go:297] selected driver: qemu2
	I0729 10:32:26.261945    4058 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:32:26.261956    4058 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:32:26.264294    4058 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:32:26.266916    4058 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:32:26.270131    4058 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:32:26.270161    4058 cni.go:84] Creating CNI manager for ""
	I0729 10:32:26.270169    4058 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:32:26.270174    4058 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:32:26.270205    4058 start.go:340] cluster config:
	{Name:test-preload-886000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-886000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:32:26.273841    4058 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:32:26.280964    4058 out.go:177] * Starting "test-preload-886000" primary control-plane node in "test-preload-886000" cluster
	I0729 10:32:26.284935    4058 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0729 10:32:26.284998    4058 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/test-preload-886000/config.json ...
	I0729 10:32:26.285015    4058 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/test-preload-886000/config.json: {Name:mk6551cec06306b6c4ea9c5d596b86110e9ec905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:32:26.285011    4058 cache.go:107] acquiring lock: {Name:mkb1f50a533710d1e5f59df940c3bc3b51c79688 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:32:26.285021    4058 cache.go:107] acquiring lock: {Name:mk439f706b5620e282515f9d352917ca45f398bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:32:26.285047    4058 cache.go:107] acquiring lock: {Name:mkedf209bae731f31150dd90ae5ace3d092aee39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:32:26.285180    4058 cache.go:107] acquiring lock: {Name:mk6f96457cfef4507dbd5a175cfa665bd95f326c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:32:26.285207    4058 cache.go:107] acquiring lock: {Name:mk6a646cb2923c004e16564315538a5abb255c55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:32:26.285245    4058 cache.go:107] acquiring lock: {Name:mk725c92c850effdd34127916b63fd92503ad34f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:32:26.285254    4058 cache.go:107] acquiring lock: {Name:mk3dff746ab12702a4d4dc3e8136c29efd0449d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:32:26.285029    4058 cache.go:107] acquiring lock: {Name:mk26d0ad414e698fbd445f4e17a4aa0084bb48be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:32:26.285473    4058 start.go:360] acquireMachinesLock for test-preload-886000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:32:26.285536    4058 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 10:32:26.285543    4058 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 10:32:26.285554    4058 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 10:32:26.285538    4058 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 10:32:26.285554    4058 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 10:32:26.285600    4058 start.go:364] duration metric: took 99.25µs to acquireMachinesLock for "test-preload-886000"
	I0729 10:32:26.285614    4058 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:32:26.285555    4058 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:32:26.285631    4058 start.go:93] Provisioning new machine with config: &{Name:test-preload-886000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-886000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:32:26.285704    4058 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:32:26.285803    4058 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:32:26.292972    4058 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:32:26.297620    4058 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 10:32:26.297760    4058 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 10:32:26.298513    4058 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:32:26.300823    4058 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:32:26.300877    4058 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 10:32:26.300898    4058 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 10:32:26.300906    4058 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 10:32:26.300982    4058 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:32:26.311506    4058 start.go:159] libmachine.API.Create for "test-preload-886000" (driver="qemu2")
	I0729 10:32:26.311530    4058 client.go:168] LocalClient.Create starting
	I0729 10:32:26.311621    4058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:32:26.311650    4058 main.go:141] libmachine: Decoding PEM data...
	I0729 10:32:26.311660    4058 main.go:141] libmachine: Parsing certificate...
	I0729 10:32:26.311703    4058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:32:26.311725    4058 main.go:141] libmachine: Decoding PEM data...
	I0729 10:32:26.311731    4058 main.go:141] libmachine: Parsing certificate...
	I0729 10:32:26.312075    4058 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:32:26.466357    4058 main.go:141] libmachine: Creating SSH key...
	I0729 10:32:26.680491    4058 main.go:141] libmachine: Creating Disk image...
	I0729 10:32:26.680507    4058 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:32:26.680709    4058 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/disk.qcow2
	I0729 10:32:26.690490    4058 main.go:141] libmachine: STDOUT: 
	I0729 10:32:26.690507    4058 main.go:141] libmachine: STDERR: 
	I0729 10:32:26.690559    4058 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/disk.qcow2 +20000M
	I0729 10:32:26.698727    4058 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:32:26.698745    4058 main.go:141] libmachine: STDERR: 
	I0729 10:32:26.698759    4058 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/disk.qcow2
	I0729 10:32:26.698762    4058 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:32:26.698776    4058 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:32:26.698805    4058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:a7:b6:66:73:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/disk.qcow2
	I0729 10:32:26.700749    4058 main.go:141] libmachine: STDOUT: 
	I0729 10:32:26.700764    4058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:32:26.700779    4058 client.go:171] duration metric: took 389.26525ms to LocalClient.Create
	I0729 10:32:26.732078    4058 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0729 10:32:26.742337    4058 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 10:32:26.751841    4058 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0729 10:32:26.803295    4058 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0729 10:32:26.837313    4058 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0729 10:32:26.893656    4058 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 10:32:26.893697    4058 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 10:32:27.015982    4058 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0729 10:32:27.016046    4058 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 731.026791ms
	I0729 10:32:27.016093    4058 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0729 10:32:27.076106    4058 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 10:32:27.076193    4058 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 10:32:27.366154    4058 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 10:32:27.366220    4058 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.081238333s
	I0729 10:32:27.366247    4058 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 10:32:28.156697    4058 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 10:32:28.613693    4058 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0729 10:32:28.613743    4058 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.328613208s
	I0729 10:32:28.613770    4058 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0729 10:32:28.701046    4058 start.go:128] duration metric: took 2.415427542s to createHost
	I0729 10:32:28.701087    4058 start.go:83] releasing machines lock for "test-preload-886000", held for 2.415583208s
	W0729 10:32:28.701168    4058 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:32:28.717640    4058 out.go:177] * Deleting "test-preload-886000" in qemu2 ...
	W0729 10:32:28.751155    4058 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:32:28.751191    4058 start.go:729] Will try again in 5 seconds ...
	I0729 10:32:29.173853    4058 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0729 10:32:29.173892    4058 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.888866417s
	I0729 10:32:29.173920    4058 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0729 10:32:30.908666    4058 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0729 10:32:30.908710    4058 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.623917333s
	I0729 10:32:30.908766    4058 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0729 10:32:31.153060    4058 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0729 10:32:31.153102    4058 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.868112667s
	I0729 10:32:31.153125    4058 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0729 10:32:32.524822    4058 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0729 10:32:32.524873    4058 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.24015225s
	I0729 10:32:32.524944    4058 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0729 10:32:33.751534    4058 start.go:360] acquireMachinesLock for test-preload-886000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:32:33.751948    4058 start.go:364] duration metric: took 332.083µs to acquireMachinesLock for "test-preload-886000"
	I0729 10:32:33.752065    4058 start.go:93] Provisioning new machine with config: &{Name:test-preload-886000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-886000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:32:33.752338    4058 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:32:33.761940    4058 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:32:33.813041    4058 start.go:159] libmachine.API.Create for "test-preload-886000" (driver="qemu2")
	I0729 10:32:33.813091    4058 client.go:168] LocalClient.Create starting
	I0729 10:32:33.813213    4058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:32:33.813285    4058 main.go:141] libmachine: Decoding PEM data...
	I0729 10:32:33.813304    4058 main.go:141] libmachine: Parsing certificate...
	I0729 10:32:33.813361    4058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:32:33.813405    4058 main.go:141] libmachine: Decoding PEM data...
	I0729 10:32:33.813417    4058 main.go:141] libmachine: Parsing certificate...
	I0729 10:32:33.813925    4058 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:32:33.976985    4058 main.go:141] libmachine: Creating SSH key...
	I0729 10:32:34.127531    4058 main.go:141] libmachine: Creating Disk image...
	I0729 10:32:34.127538    4058 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:32:34.127721    4058 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/disk.qcow2
	I0729 10:32:34.137333    4058 main.go:141] libmachine: STDOUT: 
	I0729 10:32:34.137349    4058 main.go:141] libmachine: STDERR: 
	I0729 10:32:34.137393    4058 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/disk.qcow2 +20000M
	I0729 10:32:34.145244    4058 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:32:34.145260    4058 main.go:141] libmachine: STDERR: 
	I0729 10:32:34.145277    4058 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/disk.qcow2
	I0729 10:32:34.145282    4058 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:32:34.145295    4058 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:32:34.145329    4058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c1:40:ee:ca:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/test-preload-886000/disk.qcow2
	I0729 10:32:34.147034    4058 main.go:141] libmachine: STDOUT: 
	I0729 10:32:34.147051    4058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:32:34.147063    4058 client.go:171] duration metric: took 333.980417ms to LocalClient.Create
	I0729 10:32:34.609486    4058 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0729 10:32:34.609537    4058 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.324738959s
	I0729 10:32:34.609561    4058 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0729 10:32:34.609600    4058 cache.go:87] Successfully saved all images to host disk.
	I0729 10:32:36.149260    4058 start.go:128] duration metric: took 2.396961875s to createHost
	I0729 10:32:36.149374    4058 start.go:83] releasing machines lock for "test-preload-886000", held for 2.397511917s
	W0729 10:32:36.149813    4058 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-886000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:32:36.158433    4058 out.go:177] 
	W0729 10:32:36.162537    4058 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:32:36.162571    4058 out.go:239] * 
	W0729 10:32:36.165095    4058 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:32:36.175286    4058 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-886000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-29 10:32:36.193782 -0700 PDT m=+2253.018691251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-886000 -n test-preload-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-886000 -n test-preload-886000: exit status 7 (65.947208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-886000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-886000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-886000
--- FAIL: TestPreload (10.17s)
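Every VM-creation failure in this run reduces to the same error: QEMU's networking helper could not reach the socket_vmnet daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"). A minimal diagnosis sketch for the test host, assuming only the install paths that appear in the logs above (the last line additionally assumes socket_vmnet_client's usual "socket path, then command" invocation, the same shape as the qemu-system-aarch64 launch logged above):

	# Is the daemon running, and does its Unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# Smoke test: wrap a trivial command; "Connection refused" here
	# reproduces the failure independently of minikube.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true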

TestScheduledStopUnix (9.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-858000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-858000 --memory=2048 --driver=qemu2 : exit status 80 (9.838306708s)

-- stdout --
	* [scheduled-stop-858000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-858000" primary control-plane node in "scheduled-stop-858000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-858000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-858000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-29 10:32:46.178419 -0700 PDT m=+2263.003803376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-858000 -n scheduled-stop-858000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-858000 -n scheduled-stop-858000: exit status 7 (68.482375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-858000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-858000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-858000
--- FAIL: TestScheduledStopUnix (9.98s)
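The failure sequence is identical in each start: one create attempt, an automatic "Deleting ... in qemu2", a retry after 5 seconds, then exit status 80 (GUEST_PROVISION). It can be reproduced outside the test harness with the same command the test wraps; a sketch, assuming the binary built for this run (the profile name below is illustrative, not one used by the suite):

	out/minikube-darwin-arm64 start -p repro-socket-vmnet --memory=2048 --driver=qemu2
	echo $?   # 80 expected while socket_vmnet is unreachable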

TestSkaffold (13.48s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1363994148 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1363994148 version: (1.03261875s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-360000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-360000 --memory=2600 --driver=qemu2 : exit status 80 (9.935370416s)

-- stdout --
	* [skaffold-360000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-360000" primary control-plane node in "skaffold-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

panic.go:626: *** TestSkaffold FAILED at 2024-07-29 10:32:59.660035 -0700 PDT m=+2276.486060460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-360000 -n skaffold-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-360000 -n skaffold-360000: exit status 7 (63.983709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-360000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-360000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-360000
--- FAIL: TestSkaffold (13.48s)
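Given the /opt/socket_vmnet prefix and the /var/run/socket_vmnet socket path in the logs, the daemon on this agent looks like a standard socket_vmnet install, which needs root and is usually kept alive by launchd. A hedged recovery sketch before re-running the suite (the launchd label and gateway address are assumptions based on socket_vmnet's documented defaults, not values visible in this report):

	# Restart the launchd service, if installed via "make install.launchd":
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	# Or run the daemon in the foreground to watch for startup errors:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet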

TestRunningBinaryUpgrade (627.23s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2877685620 start -p running-upgrade-466000 --memory=2200 --vm-driver=qemu2 
E0729 10:34:17.342047    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2877685620 start -p running-upgrade-466000 --memory=2200 --vm-driver=qemu2 : (1m22.575152541s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-466000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0729 10:36:03.469530    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-466000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m27.295535291s)

-- stdout --
	* [running-upgrade-466000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-466000" primary control-plane node in "running-upgrade-466000" cluster
	* Updating the running qemu2 "running-upgrade-466000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 10:35:07.254745    4497 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:35:07.254874    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:35:07.254878    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:35:07.254880    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:35:07.255009    4497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:35:07.256020    4497 out.go:298] Setting JSON to false
	I0729 10:35:07.272290    4497 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3871,"bootTime":1722270636,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:35:07.272366    4497 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:35:07.276905    4497 out.go:177] * [running-upgrade-466000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:35:07.282894    4497 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:35:07.282930    4497 notify.go:220] Checking for updates...
	I0729 10:35:07.289837    4497 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:35:07.292762    4497 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:35:07.296003    4497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:35:07.298833    4497 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:35:07.300082    4497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:35:07.303095    4497 config.go:182] Loaded profile config "running-upgrade-466000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:35:07.305826    4497 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 10:35:07.308842    4497 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:35:07.312854    4497 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:35:07.319835    4497 start.go:297] selected driver: qemu2
	I0729 10:35:07.319843    4497 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50308 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:35:07.319890    4497 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:35:07.322134    4497 cni.go:84] Creating CNI manager for ""
	I0729 10:35:07.322151    4497 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:35:07.322176    4497 start.go:340] cluster config:
	{Name:running-upgrade-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50308 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:35:07.322227    4497 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:35:07.329853    4497 out.go:177] * Starting "running-upgrade-466000" primary control-plane node in "running-upgrade-466000" cluster
	I0729 10:35:07.333741    4497 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 10:35:07.333753    4497 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 10:35:07.333759    4497 cache.go:56] Caching tarball of preloaded images
	I0729 10:35:07.333810    4497 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:35:07.333819    4497 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 10:35:07.333866    4497 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/config.json ...
	I0729 10:35:07.334285    4497 start.go:360] acquireMachinesLock for running-upgrade-466000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:35:07.334318    4497 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "running-upgrade-466000"
	I0729 10:35:07.334327    4497 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:35:07.334332    4497 fix.go:54] fixHost starting: 
	I0729 10:35:07.334879    4497 fix.go:112] recreateIfNeeded on running-upgrade-466000: state=Running err=<nil>
	W0729 10:35:07.334887    4497 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:35:07.342815    4497 out.go:177] * Updating the running qemu2 "running-upgrade-466000" VM ...
	I0729 10:35:07.346815    4497 machine.go:94] provisionDockerMachine start ...
	I0729 10:35:07.346846    4497 main.go:141] libmachine: Using SSH client type: native
	I0729 10:35:07.346945    4497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101382a10] 0x101385270 <nil>  [] 0s} localhost 50276 <nil> <nil>}
	I0729 10:35:07.346949    4497 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 10:35:07.420014    4497 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-466000
	
	I0729 10:35:07.420032    4497 buildroot.go:166] provisioning hostname "running-upgrade-466000"
	I0729 10:35:07.420076    4497 main.go:141] libmachine: Using SSH client type: native
	I0729 10:35:07.420189    4497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101382a10] 0x101385270 <nil>  [] 0s} localhost 50276 <nil> <nil>}
	I0729 10:35:07.420194    4497 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-466000 && echo "running-upgrade-466000" | sudo tee /etc/hostname
	I0729 10:35:07.495149    4497 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-466000
	
	I0729 10:35:07.495204    4497 main.go:141] libmachine: Using SSH client type: native
	I0729 10:35:07.495323    4497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101382a10] 0x101385270 <nil>  [] 0s} localhost 50276 <nil> <nil>}
	I0729 10:35:07.495332    4497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-466000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-466000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-466000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:35:07.568283    4497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:35:07.568294    4497 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19345-1151/.minikube CaCertPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19345-1151/.minikube}
	I0729 10:35:07.568301    4497 buildroot.go:174] setting up certificates
	I0729 10:35:07.568305    4497 provision.go:84] configureAuth start
	I0729 10:35:07.568314    4497 provision.go:143] copyHostCerts
	I0729 10:35:07.568382    4497 exec_runner.go:144] found /Users/jenkins/minikube-integration/19345-1151/.minikube/key.pem, removing ...
	I0729 10:35:07.568386    4497 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19345-1151/.minikube/key.pem
	I0729 10:35:07.568518    4497 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19345-1151/.minikube/key.pem (1675 bytes)
	I0729 10:35:07.568707    4497 exec_runner.go:144] found /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.pem, removing ...
	I0729 10:35:07.568711    4497 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.pem
	I0729 10:35:07.568766    4497 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.pem (1082 bytes)
	I0729 10:35:07.568869    4497 exec_runner.go:144] found /Users/jenkins/minikube-integration/19345-1151/.minikube/cert.pem, removing ...
	I0729 10:35:07.568872    4497 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19345-1151/.minikube/cert.pem
	I0729 10:35:07.568921    4497 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19345-1151/.minikube/cert.pem (1123 bytes)
	I0729 10:35:07.569023    4497 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-466000 san=[127.0.0.1 localhost minikube running-upgrade-466000]
	I0729 10:35:07.628266    4497 provision.go:177] copyRemoteCerts
	I0729 10:35:07.628326    4497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:35:07.628335    4497 sshutil.go:53] new ssh client: &{IP:localhost Port:50276 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/running-upgrade-466000/id_rsa Username:docker}
	I0729 10:35:07.669328    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:35:07.676146    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 10:35:07.683085    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 10:35:07.689658    4497 provision.go:87] duration metric: took 121.353042ms to configureAuth
	I0729 10:35:07.689667    4497 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:35:07.689771    4497 config.go:182] Loaded profile config "running-upgrade-466000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:35:07.689803    4497 main.go:141] libmachine: Using SSH client type: native
	I0729 10:35:07.689893    4497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101382a10] 0x101385270 <nil>  [] 0s} localhost 50276 <nil> <nil>}
	I0729 10:35:07.689897    4497 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 10:35:07.761689    4497 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 10:35:07.761700    4497 buildroot.go:70] root file system type: tmpfs
	I0729 10:35:07.761752    4497 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 10:35:07.761793    4497 main.go:141] libmachine: Using SSH client type: native
	I0729 10:35:07.761891    4497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101382a10] 0x101385270 <nil>  [] 0s} localhost 50276 <nil> <nil>}
	I0729 10:35:07.761931    4497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 10:35:07.839959    4497 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 10:35:07.840015    4497 main.go:141] libmachine: Using SSH client type: native
	I0729 10:35:07.840135    4497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101382a10] 0x101385270 <nil>  [] 0s} localhost 50276 <nil> <nil>}
	I0729 10:35:07.840143    4497 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 10:35:07.916579    4497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:35:07.916592    4497 machine.go:97] duration metric: took 569.798917ms to provisionDockerMachine
	I0729 10:35:07.916598    4497 start.go:293] postStartSetup for "running-upgrade-466000" (driver="qemu2")
	I0729 10:35:07.916605    4497 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:35:07.916655    4497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:35:07.916664    4497 sshutil.go:53] new ssh client: &{IP:localhost Port:50276 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/running-upgrade-466000/id_rsa Username:docker}
	I0729 10:35:07.954922    4497 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:35:07.956325    4497 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 10:35:07.956331    4497 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19345-1151/.minikube/addons for local assets ...
	I0729 10:35:07.956400    4497 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19345-1151/.minikube/files for local assets ...
	I0729 10:35:07.956512    4497 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19345-1151/.minikube/files/etc/ssl/certs/16482.pem -> 16482.pem in /etc/ssl/certs
	I0729 10:35:07.956632    4497 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:35:07.958981    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/files/etc/ssl/certs/16482.pem --> /etc/ssl/certs/16482.pem (1708 bytes)
	I0729 10:35:07.966033    4497 start.go:296] duration metric: took 49.4315ms for postStartSetup
	I0729 10:35:07.966047    4497 fix.go:56] duration metric: took 631.746042ms for fixHost
	I0729 10:35:07.966079    4497 main.go:141] libmachine: Using SSH client type: native
	I0729 10:35:07.966183    4497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101382a10] 0x101385270 <nil>  [] 0s} localhost 50276 <nil> <nil>}
	I0729 10:35:07.966187    4497 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 10:35:08.038420    4497 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722274508.222736676
	
	I0729 10:35:08.038431    4497 fix.go:216] guest clock: 1722274508.222736676
	I0729 10:35:08.038435    4497 fix.go:229] Guest: 2024-07-29 10:35:08.222736676 -0700 PDT Remote: 2024-07-29 10:35:07.966049 -0700 PDT m=+0.730981876 (delta=256.687676ms)
	I0729 10:35:08.038445    4497 fix.go:200] guest clock delta is within tolerance: 256.687676ms
	I0729 10:35:08.038448    4497 start.go:83] releasing machines lock for "running-upgrade-466000", held for 704.159625ms
	I0729 10:35:08.038511    4497 ssh_runner.go:195] Run: cat /version.json
	I0729 10:35:08.038517    4497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:35:08.038523    4497 sshutil.go:53] new ssh client: &{IP:localhost Port:50276 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/running-upgrade-466000/id_rsa Username:docker}
	I0729 10:35:08.038539    4497 sshutil.go:53] new ssh client: &{IP:localhost Port:50276 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/running-upgrade-466000/id_rsa Username:docker}
	W0729 10:35:08.039047    4497 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50276: connect: connection refused
	I0729 10:35:08.039069    4497 retry.go:31] will retry after 217.440321ms: dial tcp [::1]:50276: connect: connection refused
	W0729 10:35:08.076330    4497 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 10:35:08.076379    4497 ssh_runner.go:195] Run: systemctl --version
	I0729 10:35:08.078195    4497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:35:08.079919    4497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:35:08.079946    4497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 10:35:08.082648    4497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 10:35:08.087337    4497 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:35:08.087344    4497 start.go:495] detecting cgroup driver to use...
	I0729 10:35:08.087404    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:35:08.092698    4497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 10:35:08.095629    4497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 10:35:08.098744    4497 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 10:35:08.098769    4497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 10:35:08.102265    4497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:35:08.105880    4497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 10:35:08.109250    4497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:35:08.112236    4497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:35:08.115223    4497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 10:35:08.118510    4497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 10:35:08.122055    4497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 10:35:08.125309    4497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:35:08.128029    4497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:35:08.130752    4497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:35:08.225038    4497 ssh_runner.go:195] Run: sudo systemctl restart containerd
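
Taken together, the sed edits above switch containerd to the cgroupfs driver and the runc v2 shim, and the changes only take effect after the daemon-reload and restart. A hedged sketch of driving such steps in order and stopping on the first failure (the runner here is hypothetical, not minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        steps := []string{
            // Switch containerd off the systemd cgroup driver, as in the log.
            `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
            // Migrate v1 runtime references to the runc v2 shim.
            `sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
            // Changes only apply after a reload and restart.
            `sudo systemctl daemon-reload`,
            `sudo systemctl restart containerd`,
        }
        for _, s := range steps {
            if out, err := exec.Command("/bin/bash", "-c", s).CombinedOutput(); err != nil {
                fmt.Printf("step %q failed: %v\n%s", s, err, out)
                return
            }
        }
        fmt.Println("containerd reconfigured for cgroupfs")
    }
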
	I0729 10:35:08.232210    4497 start.go:495] detecting cgroup driver to use...
	I0729 10:35:08.232281    4497 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 10:35:08.238016    4497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:35:08.244304    4497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:35:08.250150    4497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:35:08.255173    4497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 10:35:08.261872    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:35:08.268148    4497 ssh_runner.go:195] Run: which cri-dockerd
	I0729 10:35:08.269335    4497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 10:35:08.271709    4497 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 10:35:08.276569    4497 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 10:35:08.366319    4497 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 10:35:08.462433    4497 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 10:35:08.462498    4497 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 10:35:08.467667    4497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:35:08.556295    4497 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 10:35:11.250038    4497 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.693854875s)
	I0729 10:35:11.250109    4497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 10:35:11.255022    4497 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 10:35:11.261777    4497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 10:35:11.267203    4497 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 10:35:11.336419    4497 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 10:35:11.417412    4497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:35:11.505318    4497 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 10:35:11.511490    4497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 10:35:11.516056    4497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:35:11.602145    4497 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 10:35:11.640736    4497 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 10:35:11.640802    4497 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 10:35:11.644071    4497 start.go:563] Will wait 60s for crictl version
	I0729 10:35:11.644126    4497 ssh_runner.go:195] Run: which crictl
	I0729 10:35:11.645400    4497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:35:11.656664    4497 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
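
The 60-second waits above are simple polls: stat the socket (or run crictl) until it succeeds or the deadline passes. A minimal sketch of the socket wait (the 250ms retry interval is an assumption):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the deadline passes, like the
    // "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // same success condition as the stat in the log
            }
            time.Sleep(250 * time.Millisecond) // retry interval is an assumption
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("socket ready")
    }
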
	I0729 10:35:11.656733    4497 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 10:35:11.668612    4497 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 10:35:11.690455    4497 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 10:35:11.690572    4497 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 10:35:11.692082    4497 kubeadm.go:883] updating cluster {Name:running-upgrade-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50308 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 10:35:11.692126    4497 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 10:35:11.692168    4497 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 10:35:11.702565    4497 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 10:35:11.702574    4497 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 10:35:11.702617    4497 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 10:35:11.706250    4497 ssh_runner.go:195] Run: which lz4
	I0729 10:35:11.707510    4497 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 10:35:11.708708    4497 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 10:35:11.708719    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 10:35:12.589092    4497 docker.go:649] duration metric: took 881.65175ms to copy over tarball
	I0729 10:35:12.589146    4497 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 10:35:13.728341    4497 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.139236s)
	I0729 10:35:13.728354    4497 ssh_runner.go:146] rm: /preloaded.tar.lz4
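
The preload sequence above is: check whether /preloaded.tar.lz4 already exists on the guest, copy the ~360 MB tarball over only if it does not, extract it into /var with lz4, then delete it. A local sketch of the existence-check-then-copy step (the copy helper is illustrative; the real transfer goes over scp):

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // copyIfMissing transfers src to dst only when dst is absent or has a
    // different size, mirroring the stat existence check in the log.
    func copyIfMissing(src, dst string) error {
        srcInfo, err := os.Stat(src)
        if err != nil {
            return err
        }
        if dstInfo, err := os.Stat(dst); err == nil && dstInfo.Size() == srcInfo.Size() {
            return nil // already transferred; skip the copy
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        // Source name from the log's cache path; destination as on the guest.
        err := copyIfMissing("preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4", "/preloaded.tar.lz4")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
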
	I0729 10:35:13.743796    4497 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 10:35:13.746713    4497 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 10:35:13.751929    4497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:35:13.821404    4497 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 10:35:15.018194    4497 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.196830917s)
	I0729 10:35:15.018303    4497 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 10:35:15.033218    4497 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 10:35:15.033227    4497 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 10:35:15.033233    4497 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 10:35:15.037176    4497 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:35:15.039079    4497 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:35:15.040812    4497 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:35:15.040907    4497 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:35:15.043132    4497 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:35:15.043291    4497 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:35:15.045061    4497 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:35:15.045116    4497 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:35:15.046561    4497 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 10:35:15.046617    4497 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:35:15.048076    4497 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:35:15.048371    4497 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:35:15.049276    4497 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:35:15.049424    4497 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 10:35:15.050224    4497 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:35:15.050850    4497 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:35:15.444177    4497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:35:15.454734    4497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:35:15.457976    4497 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 10:35:15.458004    4497 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:35:15.458048    4497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:35:15.474958    4497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 10:35:15.474958    4497 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 10:35:15.475012    4497 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:35:15.475053    4497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:35:15.481274    4497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:35:15.482590    4497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 10:35:15.485999    4497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 10:35:15.496558    4497 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 10:35:15.496578    4497 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:35:15.496594    4497 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 10:35:15.496604    4497 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 10:35:15.496634    4497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:35:15.496635    4497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 10:35:15.503141    4497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:35:15.509423    4497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 10:35:15.509425    4497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 10:35:15.509541    4497 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 10:35:15.523698    4497 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 10:35:15.523718    4497 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:35:15.523700    4497 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 10:35:15.523757    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 10:35:15.523769    4497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:35:15.534228    4497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 10:35:15.538142    4497 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 10:35:15.538152    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 10:35:15.538197    4497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0729 10:35:15.557414    4497 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 10:35:15.557543    4497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:35:15.568359    4497 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 10:35:15.568409    4497 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 10:35:15.568428    4497 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:35:15.568476    4497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 10:35:15.570851    4497 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 10:35:15.570869    4497 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:35:15.570909    4497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:35:15.583038    4497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 10:35:15.583094    4497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 10:35:15.583157    4497 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 10:35:15.583164    4497 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 10:35:15.584757    4497 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0729 10:35:15.584771    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0729 10:35:15.584967    4497 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 10:35:15.584984    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0729 10:35:15.633825    4497 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 10:35:15.633960    4497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:35:15.664877    4497 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 10:35:15.664890    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 10:35:15.672008    4497 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 10:35:15.672029    4497 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:35:15.672091    4497 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:35:15.751136    4497 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 10:35:15.893622    4497 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 10:35:15.893636    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0729 10:35:16.936797    4497 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load": (1.043176542s)
	I0729 10:35:16.936838    4497 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 10:35:16.936815    4497 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.264766292s)
	I0729 10:35:16.936888    4497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 10:35:16.937331    4497 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 10:35:16.942025    4497 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 10:35:16.942098    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 10:35:17.005494    4497 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 10:35:17.005534    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 10:35:17.239653    4497 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 10:35:17.239695    4497 cache_images.go:92] duration metric: took 2.206560709s to LoadCachedImages
	W0729 10:35:17.239737    4497 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
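
Each "needs transfer" decision above compares the image ID reported by the runtime against the hash expected from the cache; on a mismatch the stale image is removed and the cached tarball is streamed into docker load (the sudo cat ... | docker load lines). A sketch of that flow for one image (name, hash, and path taken from the pause:3.7 lines above; comparison details are simplified):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // ensureImage reloads an image from a cached tarball when the runtime
    // does not already hold it at the expected hash.
    func ensureImage(image, wantHash, cachedTar string) error {
        out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
        got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
        if got == wantHash {
            return nil // present at the expected hash; nothing to transfer
        }
        // Drop any stale copy first, as the docker rmi calls above do.
        exec.Command("docker", "rmi", image).Run()
        f, err := os.Open(cachedTar)
        if err != nil {
            return err
        }
        defer f.Close()
        load := exec.Command("docker", "load")
        load.Stdin = f // equivalent of `sudo cat <tar> | docker load`
        return load.Run()
    }

    func main() {
        err := ensureImage("registry.k8s.io/pause:3.7",
            "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
            "/var/lib/minikube/images/pause_3.7")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
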
	I0729 10:35:17.239746    4497 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 10:35:17.239797    4497 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-466000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:35:17.239862    4497 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 10:35:17.253410    4497 cni.go:84] Creating CNI manager for ""
	I0729 10:35:17.253424    4497 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:35:17.253429    4497 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:35:17.253440    4497 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-466000 NodeName:running-upgrade-466000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:35:17.253509    4497 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-466000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 10:35:17.253561    4497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 10:35:17.257314    4497 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:35:17.257351    4497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 10:35:17.260243    4497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 10:35:17.264845    4497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:35:17.269923    4497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 10:35:17.275562    4497 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 10:35:17.276944    4497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:35:17.354510    4497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:35:17.359556    4497 certs.go:68] Setting up /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000 for IP: 10.0.2.15
	I0729 10:35:17.359562    4497 certs.go:194] generating shared ca certs ...
	I0729 10:35:17.359571    4497 certs.go:226] acquiring lock for ca certs: {Name:mk28bd7d778d1316d2729251af42b84d93001f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:35:17.359728    4497 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.key
	I0729 10:35:17.359780    4497 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/proxy-client-ca.key
	I0729 10:35:17.359788    4497 certs.go:256] generating profile certs ...
	I0729 10:35:17.359848    4497 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/client.key
	I0729 10:35:17.359863    4497 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/apiserver.key.48a2b039
	I0729 10:35:17.359877    4497 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/apiserver.crt.48a2b039 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 10:35:17.456849    4497 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/apiserver.crt.48a2b039 ...
	I0729 10:35:17.456856    4497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/apiserver.crt.48a2b039: {Name:mka5bf0eaac8299e36abb779c54fbcabd1c0128e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:35:17.457122    4497 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/apiserver.key.48a2b039 ...
	I0729 10:35:17.457126    4497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/apiserver.key.48a2b039: {Name:mk1539bdbac36588c277d1f8569b2bb6bc7c291f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:35:17.457247    4497 certs.go:381] copying /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/apiserver.crt.48a2b039 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/apiserver.crt
	I0729 10:35:17.457379    4497 certs.go:385] copying /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/apiserver.key.48a2b039 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/apiserver.key
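
The apiserver serving certificate generated above carries the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1, and 10.0.2.15. A sketch of producing such a certificate with Go's crypto/x509 (self-signed here for brevity as an assumption; minikube signs with its CA, and the 26280h lifetime is copied from the cluster config above):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The IP SANs listed in the crypto.go:68 line above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
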
	I0729 10:35:17.457515    4497 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/proxy-client.key
	I0729 10:35:17.457639    4497 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/1648.pem (1338 bytes)
	W0729 10:35:17.457666    4497 certs.go:480] ignoring /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/1648_empty.pem, impossibly tiny 0 bytes
	I0729 10:35:17.457670    4497 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:35:17.457689    4497 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:35:17.457706    4497 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:35:17.457724    4497 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/key.pem (1675 bytes)
	I0729 10:35:17.457764    4497 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/files/etc/ssl/certs/16482.pem (1708 bytes)
	I0729 10:35:17.458073    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:35:17.465835    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 10:35:17.473842    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:35:17.480875    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:35:17.488391    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 10:35:17.495310    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 10:35:17.502183    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:35:17.509229    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:35:17.517164    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/1648.pem --> /usr/share/ca-certificates/1648.pem (1338 bytes)
	I0729 10:35:17.524280    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/files/etc/ssl/certs/16482.pem --> /usr/share/ca-certificates/16482.pem (1708 bytes)
	I0729 10:35:17.531090    4497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:35:17.537877    4497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:35:17.543160    4497 ssh_runner.go:195] Run: openssl version
	I0729 10:35:17.545159    4497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16482.pem && ln -fs /usr/share/ca-certificates/16482.pem /etc/ssl/certs/16482.pem"
	I0729 10:35:17.548399    4497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16482.pem
	I0729 10:35:17.549875    4497 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:03 /usr/share/ca-certificates/16482.pem
	I0729 10:35:17.549894    4497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16482.pem
	I0729 10:35:17.551893    4497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16482.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 10:35:17.554601    4497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:35:17.557843    4497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:35:17.559485    4497 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:35:17.559504    4497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:35:17.561323    4497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:35:17.564017    4497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1648.pem && ln -fs /usr/share/ca-certificates/1648.pem /etc/ssl/certs/1648.pem"
	I0729 10:35:17.567003    4497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1648.pem
	I0729 10:35:17.568446    4497 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:03 /usr/share/ca-certificates/1648.pem
	I0729 10:35:17.568466    4497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1648.pem
	I0729 10:35:17.570308    4497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1648.pem /etc/ssl/certs/51391683.0"
	I0729 10:35:17.573610    4497 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:35:17.575143    4497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 10:35:17.577015    4497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 10:35:17.578996    4497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 10:35:17.580721    4497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 10:35:17.582688    4497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 10:35:17.584603    4497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
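
The six openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another 24 hours. An equivalent check in Go (the path is taken from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires inside d,
    // the same question `openssl x509 -checkend 86400` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }
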
	I0729 10:35:17.586441    4497 kubeadm.go:392] StartCluster: {Name:running-upgrade-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50308 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:35:17.586502    4497 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 10:35:17.598427    4497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 10:35:17.602173    4497 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 10:35:17.602178    4497 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 10:35:17.602202    4497 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 10:35:17.605235    4497 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:35:17.605458    4497 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-466000" does not appear in /Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:35:17.605505    4497 kubeconfig.go:62] /Users/jenkins/minikube-integration/19345-1151/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-466000" cluster setting kubeconfig missing "running-upgrade-466000" context setting]
	I0729 10:35:17.605656    4497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/kubeconfig: {Name:mk69e1ff39ac907f2664a3f00c50d678e5bdc356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:35:17.606353    4497 kapi.go:59] client config for running-upgrade-466000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/client.key", CAFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1027180c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 10:35:17.606699    4497 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 10:35:17.609507    4497 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-466000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
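
The drift check above relies on diff's exit status: 0 means the rendered kubeadm.yaml.new matches the file on disk, 1 means drift (reconfigure from the new file), and anything higher is an error. A sketch of that decision (paths from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.CombinedOutput()
        if cmd.ProcessState == nil {
            fmt.Println("diff did not run:", err)
            return
        }
        switch cmd.ProcessState.ExitCode() {
        case 0:
            fmt.Println("no drift; keep the existing config")
        case 1:
            // Exit 1 from diff means the files differ, not that it failed.
            fmt.Printf("drift detected, reconfiguring from the new file:\n%s", out)
        default:
            fmt.Printf("diff error: %v\n%s", err, out)
        }
    }
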
	I0729 10:35:17.609511    4497 kubeadm.go:1160] stopping kube-system containers ...
	I0729 10:35:17.609548    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 10:35:17.620311    4497 docker.go:483] Stopping containers: [a179c7c0be3b b71c6a144b0b a7b5b56d4643 27436bba39ee a8b8304d388d bbc93ed0c46c 190f5a188a52 c647ccee48a0 a84a5d224074 be7331d69969 217ddd7b537f 50ce727d0c42 353c131d4836]
	I0729 10:35:17.620387    4497 ssh_runner.go:195] Run: docker stop a179c7c0be3b b71c6a144b0b a7b5b56d4643 27436bba39ee a8b8304d388d bbc93ed0c46c 190f5a188a52 c647ccee48a0 a84a5d224074 be7331d69969 217ddd7b537f 50ce727d0c42 353c131d4836
	I0729 10:35:17.631210    4497 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 10:35:17.723592    4497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:35:17.727905    4497 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 29 17:34 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 29 17:34 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 29 17:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 29 17:34 /etc/kubernetes/scheduler.conf
	
	I0729 10:35:17.727939    4497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/admin.conf
	I0729 10:35:17.731566    4497 kubeadm.go:163] "https://control-plane.minikube.internal:50308" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:35:17.731597    4497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:35:17.735019    4497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/kubelet.conf
	I0729 10:35:17.738290    4497 kubeadm.go:163] "https://control-plane.minikube.internal:50308" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:35:17.738316    4497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:35:17.741746    4497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/controller-manager.conf
	I0729 10:35:17.744887    4497 kubeadm.go:163] "https://control-plane.minikube.internal:50308" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:35:17.744910    4497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:35:17.747901    4497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/scheduler.conf
	I0729 10:35:17.750585    4497 kubeadm.go:163] "https://control-plane.minikube.internal:50308" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:35:17.750607    4497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
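
The pattern repeated above is: grep each /etc/kubernetes/*.conf for the expected control-plane endpoint, and when the grep misses (exit status 1), delete the file so the later kubeadm init phase kubeconfig run regenerates it. A sketch (endpoint and file list from the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50308" // from the log
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing endpoint: remove so `kubeadm init phase kubeconfig`
                // rewrites the file, as the rm -f lines above do.
                fmt.Printf("%s lacks %s, removing\n", f, endpoint)
                os.Remove(f)
            }
        }
    }
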
	I0729 10:35:17.753618    4497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:35:17.756611    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:35:17.806299    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:35:18.708948    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:35:18.907692    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:35:18.933330    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:35:18.956065    4497 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:35:18.956145    4497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:35:19.458473    4497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:35:19.958187    4497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:35:19.963393    4497 api_server.go:72] duration metric: took 1.007378292s to wait for apiserver process to appear ...
	I0729 10:35:19.963402    4497 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:35:19.963410    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:35:24.965298    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:35:24.965341    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:35:29.965566    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:35:29.965657    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:35:34.966343    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:35:34.966421    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:35:39.967684    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:35:39.967765    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:35:44.969068    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:35:44.969147    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:35:49.970943    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:35:49.971033    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:35:54.973403    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:35:54.973487    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:35:59.975897    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:35:59.975976    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:36:04.978027    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:36:04.978108    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:36:09.980612    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:36:09.980691    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:36:14.983153    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:36:14.983240    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:36:19.985632    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
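
Each healthz probe above gives the apiserver about five seconds before the client timeout fires, then retries. A sketch of that wait loop (the overall deadline and TLS handling are assumptions; the URL is from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gaps between checks above
            // Skipping verification is a simplification; the real client
            // trusts the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute) // overall budget is an assumption
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond) // brief pause before the next probe
        }
        fmt.Println("apiserver never became healthy")
    }
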
	I0729 10:36:19.985909    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:36:20.013819    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:36:20.013958    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:36:20.031203    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:36:20.031283    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:36:20.045739    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:36:20.045811    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:36:20.058015    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:36:20.058083    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:36:20.068112    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:36:20.068179    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:36:20.082321    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:36:20.082385    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:36:20.092393    4497 logs.go:276] 0 containers: []
	W0729 10:36:20.092405    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:36:20.092472    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:36:20.102834    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:36:20.102850    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:36:20.102856    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:36:20.114430    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:36:20.114445    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:36:20.139061    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:36:20.139069    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:36:20.150741    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:36:20.150754    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:36:20.162516    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:36:20.162529    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:36:20.177238    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:36:20.177252    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:36:20.192394    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:36:20.192406    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:36:20.204033    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:36:20.204044    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:36:20.238360    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:36:20.238370    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:36:20.306717    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:36:20.306730    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:36:20.323524    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:36:20.323536    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:36:20.334930    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:36:20.334945    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:36:20.352717    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:36:20.352730    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:36:20.364295    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:36:20.364306    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:36:20.368491    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:36:20.368499    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:36:20.384069    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:36:20.384078    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:36:20.402267    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:36:20.402278    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:36:22.932562    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:36:27.934491    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:36:27.934919    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:36:27.974341    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:36:27.974492    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:36:27.992192    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:36:27.992284    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:36:28.005197    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:36:28.005271    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:36:28.017017    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:36:28.017089    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:36:28.027722    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:36:28.027780    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:36:28.038496    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:36:28.038557    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:36:28.048815    4497 logs.go:276] 0 containers: []
	W0729 10:36:28.048829    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:36:28.048885    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:36:28.058851    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:36:28.058868    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:36:28.058873    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:36:28.073804    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:36:28.073816    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:36:28.086063    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:36:28.086072    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:36:28.097714    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:36:28.097727    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:36:28.115162    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:36:28.115172    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:36:28.119672    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:36:28.119680    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:36:28.159114    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:36:28.159124    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:36:28.173557    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:36:28.173569    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:36:28.185116    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:36:28.185128    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:36:28.214849    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:36:28.214859    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:36:28.228950    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:36:28.228961    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:36:28.240119    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:36:28.240132    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:36:28.266104    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:36:28.266114    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:36:28.283181    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:36:28.283192    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:36:28.318999    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:36:28.319009    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:36:28.332899    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:36:28.332911    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:36:28.345040    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:36:28.345051    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:36:30.857194    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:36:35.859516    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:36:35.859724    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:36:35.889740    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:36:35.889861    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:36:35.910689    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:36:35.910777    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:36:35.924197    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:36:35.924267    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:36:35.943940    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:36:35.944009    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:36:35.953914    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:36:35.953983    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:36:35.964597    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:36:35.964659    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:36:35.974924    4497 logs.go:276] 0 containers: []
	W0729 10:36:35.974936    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:36:35.974989    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:36:35.985659    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:36:35.985677    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:36:35.985682    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:36:36.011093    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:36:36.011104    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:36:36.025576    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:36:36.025590    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:36:36.045455    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:36:36.045465    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:36:36.060833    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:36:36.060844    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:36:36.095903    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:36:36.095919    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:36:36.110094    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:36:36.110107    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:36:36.121191    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:36:36.121204    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:36:36.132368    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:36:36.132378    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:36:36.158791    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:36:36.158802    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:36:36.194880    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:36:36.194888    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:36:36.209375    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:36:36.209385    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:36:36.220619    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:36:36.220631    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:36:36.231963    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:36:36.231973    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:36:36.243190    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:36:36.243202    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:36:36.247648    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:36:36.247657    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:36:36.259474    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:36:36.259487    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:36:38.778742    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:36:43.781220    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:36:43.781682    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:36:43.823694    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:36:43.823814    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:36:43.843179    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:36:43.843270    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:36:43.858111    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:36:43.858189    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:36:43.870443    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:36:43.870512    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:36:43.880904    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:36:43.880971    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:36:43.891709    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:36:43.891778    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:36:43.901767    4497 logs.go:276] 0 containers: []
	W0729 10:36:43.901781    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:36:43.901837    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:36:43.916220    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:36:43.916239    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:36:43.916245    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:36:43.929296    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:36:43.929306    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:36:43.955105    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:36:43.955115    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:36:43.991146    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:36:43.991163    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:36:44.028193    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:36:44.028206    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:36:44.041981    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:36:44.041995    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:36:44.055523    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:36:44.055535    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:36:44.070897    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:36:44.070909    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:36:44.075374    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:36:44.075382    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:36:44.087697    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:36:44.087713    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:36:44.100404    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:36:44.100414    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:36:44.114866    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:36:44.114879    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:36:44.130181    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:36:44.130190    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:36:44.155342    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:36:44.155353    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:36:44.166525    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:36:44.166536    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:36:44.178162    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:36:44.178175    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:36:44.190815    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:36:44.190830    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:36:46.708672    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:36:51.711141    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:36:51.711482    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:36:51.741985    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:36:51.742102    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:36:51.760914    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:36:51.761011    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:36:51.775074    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:36:51.775150    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:36:51.786877    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:36:51.786949    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:36:51.797829    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:36:51.797896    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:36:51.808361    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:36:51.808429    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:36:51.818566    4497 logs.go:276] 0 containers: []
	W0729 10:36:51.818576    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:36:51.818629    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:36:51.828939    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:36:51.828956    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:36:51.828962    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:36:51.855603    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:36:51.855616    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:36:51.870794    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:36:51.870806    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:36:51.887029    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:36:51.887040    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:36:51.904101    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:36:51.904117    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:36:51.915339    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:36:51.915348    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:36:51.926851    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:36:51.926869    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:36:51.938822    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:36:51.938832    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:36:51.949936    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:36:51.949948    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:36:51.984031    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:36:51.984038    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:36:51.987850    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:36:51.987856    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:36:52.001831    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:36:52.001841    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:36:52.019000    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:36:52.019010    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:36:52.045114    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:36:52.045122    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:36:52.081262    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:36:52.081274    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:36:52.095746    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:36:52.095757    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:36:52.112696    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:36:52.112708    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:36:54.626102    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:36:59.628676    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:36:59.628949    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:36:59.659553    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:36:59.659624    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:36:59.677074    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:36:59.677154    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:36:59.692581    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:36:59.692653    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:36:59.703364    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:36:59.703423    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:36:59.713901    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:36:59.713969    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:36:59.724475    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:36:59.724538    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:36:59.735037    4497 logs.go:276] 0 containers: []
	W0729 10:36:59.735049    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:36:59.735106    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:36:59.745596    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:36:59.745617    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:36:59.745623    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:36:59.760510    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:36:59.760521    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:36:59.772552    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:36:59.772563    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:36:59.790935    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:36:59.790947    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:36:59.804583    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:36:59.804594    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:36:59.822619    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:36:59.822631    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:36:59.840437    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:36:59.840447    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:36:59.853289    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:36:59.853303    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:36:59.865583    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:36:59.865594    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:36:59.903704    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:36:59.903717    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:36:59.918470    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:36:59.918485    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:36:59.945030    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:36:59.945045    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:36:59.950245    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:36:59.950253    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:36:59.975951    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:36:59.975961    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:36:59.987946    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:36:59.987956    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:36:59.999786    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:36:59.999799    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:37:00.011331    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:37:00.011343    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:37:02.549178    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:37:07.551820    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:37:07.551970    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:37:07.563819    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:37:07.563899    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:37:07.574727    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:37:07.574803    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:37:07.585206    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:37:07.585269    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:37:07.596056    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:37:07.596123    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:37:07.606555    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:37:07.606621    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:37:07.617630    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:37:07.617698    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:37:07.628179    4497 logs.go:276] 0 containers: []
	W0729 10:37:07.628190    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:37:07.628246    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:37:07.643404    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:37:07.643423    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:37:07.643429    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:37:07.681857    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:37:07.681869    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:37:07.707544    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:37:07.707555    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:37:07.722221    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:37:07.722232    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:37:07.738102    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:37:07.738113    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:37:07.750159    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:37:07.750171    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:37:07.764468    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:37:07.764479    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:37:07.782427    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:37:07.782448    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:37:07.793816    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:37:07.793828    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:37:07.805427    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:37:07.805439    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:37:07.823672    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:37:07.823683    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:37:07.849289    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:37:07.849296    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:37:07.861007    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:37:07.861017    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:37:07.865588    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:37:07.865595    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:37:07.877902    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:37:07.877914    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:37:07.889312    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:37:07.889323    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:37:07.924932    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:37:07.924940    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:37:10.440580    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:37:15.442594    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:37:15.442763    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:37:15.454700    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:37:15.454775    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:37:15.465407    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:37:15.465480    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:37:15.478391    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:37:15.478466    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:37:15.489208    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:37:15.489277    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:37:15.499482    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:37:15.499540    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:37:15.510292    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:37:15.510353    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:37:15.521903    4497 logs.go:276] 0 containers: []
	W0729 10:37:15.521914    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:37:15.521967    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:37:15.532640    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:37:15.532660    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:37:15.532666    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:37:15.572222    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:37:15.572245    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:37:15.577321    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:37:15.577331    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:37:15.591846    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:37:15.591861    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:37:15.603692    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:37:15.603705    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:37:15.616163    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:37:15.616177    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:37:15.629001    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:37:15.629016    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:37:15.656444    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:37:15.656466    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:37:15.672769    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:37:15.672781    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:37:15.684607    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:37:15.684619    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:37:15.700987    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:37:15.701003    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:37:15.717135    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:37:15.717148    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:37:15.735827    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:37:15.735839    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:37:15.754804    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:37:15.754820    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:37:15.766682    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:37:15.766696    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:37:15.804759    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:37:15.804773    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:37:15.823256    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:37:15.823271    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:37:18.351175    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:37:23.353259    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:37:23.353403    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:37:23.400083    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:37:23.400162    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:37:23.411903    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:37:23.411979    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:37:23.422126    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:37:23.422198    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:37:23.432000    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:37:23.432071    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:37:23.442801    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:37:23.442877    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:37:23.453270    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:37:23.453336    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:37:23.463379    4497 logs.go:276] 0 containers: []
	W0729 10:37:23.463389    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:37:23.463445    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:37:23.475179    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:37:23.475197    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:37:23.475202    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:37:23.490962    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:37:23.490974    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:37:23.515160    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:37:23.515173    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:37:23.551820    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:37:23.551831    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:37:23.562757    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:37:23.562770    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:37:23.578333    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:37:23.578348    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:37:23.589774    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:37:23.589784    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:37:23.594168    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:37:23.594174    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:37:23.606002    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:37:23.606016    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:37:23.617123    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:37:23.617135    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:37:23.641070    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:37:23.641078    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:37:23.676794    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:37:23.676804    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:37:23.691886    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:37:23.691897    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:37:23.706645    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:37:23.706658    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:37:23.723711    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:37:23.723723    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:37:23.735297    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:37:23.735308    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:37:23.747442    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:37:23.747453    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:37:26.274715    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:37:31.276670    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:37:31.276777    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:37:31.295777    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:37:31.295855    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:37:31.307029    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:37:31.307117    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:37:31.321427    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:37:31.321504    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:37:31.341861    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:37:31.341933    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:37:31.352782    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:37:31.352850    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:37:31.363755    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:37:31.363826    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:37:31.374106    4497 logs.go:276] 0 containers: []
	W0729 10:37:31.374118    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:37:31.374177    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:37:31.385088    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:37:31.385106    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:37:31.385112    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:37:31.421777    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:37:31.421789    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:37:31.438505    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:37:31.438519    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:37:31.453112    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:37:31.453124    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:37:31.465808    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:37:31.465823    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:37:31.484855    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:37:31.484872    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:37:31.497209    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:37:31.497220    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:37:31.509408    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:37:31.509424    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:37:31.522497    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:37:31.522509    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:37:31.561587    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:37:31.561607    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:37:31.567133    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:37:31.567145    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:37:31.593446    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:37:31.593459    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:37:31.605565    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:37:31.605577    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:37:31.622182    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:37:31.622194    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:37:31.635073    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:37:31.635085    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:37:31.660916    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:37:31.660932    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:37:31.676129    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:37:31.676140    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:37:34.191222    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:37:39.192692    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:37:39.193095    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:37:39.234946    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:37:39.235091    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:37:39.258218    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:37:39.258320    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:37:39.272412    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:37:39.272486    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:37:39.286451    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:37:39.286527    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:37:39.297089    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:37:39.297158    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:37:39.307314    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:37:39.307384    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:37:39.317274    4497 logs.go:276] 0 containers: []
	W0729 10:37:39.317286    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:37:39.317338    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:37:39.332155    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:37:39.332175    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:37:39.332180    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:37:39.343542    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:37:39.343555    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:37:39.354926    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:37:39.354937    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:37:39.366708    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:37:39.366722    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:37:39.387161    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:37:39.387174    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:37:39.422183    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:37:39.422190    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:37:39.433007    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:37:39.433018    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:37:39.448240    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:37:39.448250    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:37:39.459929    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:37:39.459942    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:37:39.478054    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:37:39.478063    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:37:39.506369    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:37:39.506385    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:37:39.511215    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:37:39.511222    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:37:39.540470    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:37:39.540487    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:37:39.558818    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:37:39.558828    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:37:39.574171    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:37:39.574182    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:37:39.614085    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:37:39.614096    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:37:39.627782    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:37:39.627791    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:37:42.148921    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:37:47.151437    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
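
Editorial note: the pair of lines above (api_server.go:253/269) repeats throughout this transcript: minikube probes the apiserver's /healthz endpoint, gives up after roughly five seconds with a client timeout, then falls back to collecting diagnostics. A minimal sketch of that probe pattern is below; it is not minikube's actual implementation, and the endpoint URL and 5-second timeout are assumptions read off the log.

```go
// Minimal sketch of the healthz polling pattern seen in this log:
// GET https://<node-ip>:8443/healthz with a short client timeout,
// retrying until the API server answers "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between attempts above
		Transport: &http.Transport{
			// The guest apiserver's serving cert is not trusted by the host;
			// skipping verification is acceptable only in a throwaway sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "context deadline exceeded" or "i/o timeout", as logged above
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz not ready: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	url := "https://10.0.2.15:8443/healthz" // node IP taken from the log
	for {
		if err := checkHealthz(url); err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second) // back off before the next probe
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```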
	I0729 10:37:47.151830    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:37:47.190240    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:37:47.190357    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:37:47.209074    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:37:47.209147    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:37:47.221935    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:37:47.221994    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:37:47.237246    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:37:47.237349    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:37:47.247871    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:37:47.247938    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:37:47.258934    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:37:47.259005    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:37:47.268953    4497 logs.go:276] 0 containers: []
	W0729 10:37:47.268964    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:37:47.269024    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:37:47.279381    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
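
Editorial note: each failed health check is followed by the enumeration block above, where one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call per control-plane component collects matching container IDs (logs.go:276). Here is a minimal stand-alone sketch of that step; the component names and the `k8s_` prefix come straight from the commands shown, while the helper itself is hypothetical.

```go
// Sketch of the container-discovery step: for each component, run
// `docker ps -a` with a name filter and collect the matching IDs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the filtered `docker ps -a` calls in the transcript.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}
```

Two IDs per component, as in the log, indicate a restarted container alongside its exited predecessor; zero IDs produces the "No container was found matching" warning seen for "kindnet".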
	I0729 10:37:47.279401    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:37:47.279406    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:37:47.305516    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:37:47.305530    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:37:47.319488    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:37:47.319498    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:37:47.356249    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:37:47.356257    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:37:47.374200    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:37:47.374209    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:37:47.385951    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:37:47.385961    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:37:47.397133    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:37:47.397143    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:37:47.422632    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:37:47.422640    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:37:47.427197    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:37:47.427206    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:37:47.441307    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:37:47.441317    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:37:47.459851    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:37:47.459865    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:37:47.475372    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:37:47.475400    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:37:47.497121    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:37:47.497132    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:37:47.509278    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:37:47.509290    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:37:47.545841    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:37:47.545853    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:37:47.558849    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:37:47.558863    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:37:47.571154    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:37:47.571166    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
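
Editorial note: the "Gathering logs for X [id]" lines above each map to a `docker logs --tail 400 <id>` invocation run through /bin/bash on the guest (ssh_runner.go:195). A minimal local stand-in for that step follows; the container IDs in `main` are placeholders copied from the log, and in the real flow they come from the enumeration step shown earlier.

```go
// Sketch of the per-container log-gathering step.
package main

import (
	"fmt"
	"os/exec"
)

// tailLogs returns the last n lines of a container's logs, mirroring the
// `docker logs --tail 400 <id>` commands in the transcript.
func tailLogs(id string, n int) (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("docker logs --tail %d %s", n, id)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, id := range []string{"742c989dfbd6", "de54c4d9508a"} {
		logs, err := tailLogs(id, 400)
		if err != nil {
			fmt.Println("gather failed for", id, ":", err)
			continue
		}
		fmt.Println(logs)
	}
}
```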
	I0729 10:37:50.085546    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:37:55.088013    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:37:55.088148    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:37:55.099785    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:37:55.099859    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:37:55.111074    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:37:55.111148    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:37:55.122076    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:37:55.122132    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:37:55.134301    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:37:55.134360    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:37:55.145691    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:37:55.145750    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:37:55.156362    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:37:55.156430    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:37:55.166864    4497 logs.go:276] 0 containers: []
	W0729 10:37:55.166877    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:37:55.166934    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:37:55.177702    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:37:55.177718    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:37:55.177723    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:37:55.182361    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:37:55.182367    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:37:55.194195    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:37:55.194205    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:37:55.206444    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:37:55.206456    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:37:55.244096    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:37:55.244106    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:37:55.257955    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:37:55.257968    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:37:55.283785    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:37:55.283795    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:37:55.295693    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:37:55.295708    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:37:55.307613    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:37:55.307627    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:37:55.331653    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:37:55.331660    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:37:55.367672    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:37:55.367683    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:37:55.382713    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:37:55.382724    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:37:55.402001    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:37:55.402012    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:37:55.413903    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:37:55.413914    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:37:55.429050    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:37:55.429063    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:37:55.440388    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:37:55.440400    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:37:55.457809    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:37:55.457820    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
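
Editorial note: the "describe nodes" step in each cycle shells out to the version-matched kubectl binary that minikube keeps inside the guest, pointed at the in-guest kubeconfig. The sketch below just replays that exact command; both paths are copied verbatim from the log lines above.

```go
// Sketch of the "describe nodes" collection step.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}
```

Using the bundled binary rather than any kubectl on PATH keeps the client version pinned to the cluster version (v1.24.1 here), which is why the full path appears in every cycle.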
	I0729 10:37:57.974159    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:02.976215    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:02.976383    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:02.991959    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:02.992039    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:03.004050    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:03.004119    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:03.014153    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:03.014225    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:03.024531    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:03.024603    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:03.035350    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:03.035419    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:03.050217    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:03.050289    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:03.060354    4497 logs.go:276] 0 containers: []
	W0729 10:38:03.060370    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:03.060422    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:03.077507    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:03.077525    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:03.077530    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:03.092299    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:03.092311    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:03.103599    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:03.103609    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:03.118422    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:03.118433    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:03.129631    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:03.129643    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:03.154166    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:03.154177    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:03.168233    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:03.168246    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:03.202661    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:03.202668    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:03.216954    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:03.216966    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:38:03.234794    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:03.234806    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:03.246382    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:03.246396    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:03.257975    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:03.257987    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:03.292748    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:03.292761    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:03.315444    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:03.315456    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:03.339773    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:03.339782    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:03.351171    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:03.351183    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:03.355296    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:03.355304    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:05.872495    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:10.874859    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:10.875165    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:10.900951    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:10.901066    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:10.919380    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:10.919465    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:10.932713    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:10.932794    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:10.948640    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:10.948713    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:10.960423    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:10.960495    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:10.972056    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:10.972131    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:10.983236    4497 logs.go:276] 0 containers: []
	W0729 10:38:10.983250    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:10.983315    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:10.996022    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:10.996042    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:10.996048    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:11.008119    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:11.008132    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:11.033281    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:11.033292    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:11.070841    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:11.070853    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:11.085146    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:11.085165    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:11.096851    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:11.096865    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:11.117846    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:11.117858    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:38:11.136079    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:11.136093    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:11.179307    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:11.179319    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:11.193804    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:11.193816    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:11.220575    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:11.220584    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:11.232714    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:11.232726    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:11.244210    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:11.244220    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:11.257122    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:11.257135    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:11.262190    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:11.262198    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:11.275854    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:11.275865    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:11.287681    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:11.287693    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:13.802385    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:18.804362    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:18.804523    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:18.815660    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:18.815727    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:18.826772    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:18.826831    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:18.837496    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:18.837559    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:18.854502    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:18.854574    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:18.869842    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:18.869905    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:18.881259    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:18.881325    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:18.895484    4497 logs.go:276] 0 containers: []
	W0729 10:38:18.895494    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:18.895545    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:18.906735    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:18.906751    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:18.906771    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:18.911193    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:18.911200    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:18.922959    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:18.922969    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:18.935516    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:18.935527    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:18.951356    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:18.951373    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:18.963572    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:18.963583    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:18.975383    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:18.975394    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:18.993559    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:18.993571    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:19.040399    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:19.040410    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:19.055082    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:19.055097    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:19.083965    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:19.083995    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:19.096534    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:19.096548    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:19.121924    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:19.121944    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:19.134204    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:19.134217    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:19.172503    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:19.172527    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:19.188811    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:19.188826    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:19.205237    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:19.205250    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:38:21.725209    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:26.727306    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:26.727644    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:26.762125    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:26.762229    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:26.780063    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:26.780143    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:26.793560    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:26.793638    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:26.805269    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:26.805338    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:26.815912    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:26.815975    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:26.826059    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:26.826127    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:26.836156    4497 logs.go:276] 0 containers: []
	W0729 10:38:26.836169    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:26.836226    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:26.846486    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:26.846503    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:26.846508    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:26.857488    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:26.857499    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:26.893507    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:26.893519    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:26.907145    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:26.907158    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:26.932381    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:26.932392    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:26.944192    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:26.944202    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:26.956368    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:26.956380    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:26.969534    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:26.969546    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:26.993183    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:26.993197    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:27.031300    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:27.031313    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:27.035568    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:27.035575    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:27.046903    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:27.046916    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:27.062371    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:27.062382    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:38:27.084122    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:27.084134    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:27.095830    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:27.095841    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:27.109504    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:27.109514    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:27.124623    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:27.124634    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:29.638162    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:34.640218    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:34.640329    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:34.651315    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:34.651384    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:34.662279    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:34.662347    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:34.672530    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:34.672600    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:34.687285    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:34.687357    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:34.697909    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:34.697975    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:34.717805    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:34.717874    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:34.728962    4497 logs.go:276] 0 containers: []
	W0729 10:38:34.728974    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:34.729033    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:34.739923    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:34.739948    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:34.739954    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:34.754785    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:34.754800    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:34.766692    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:34.766703    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:38:34.792915    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:34.792928    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:34.805459    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:34.805470    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:34.828056    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:34.828067    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:34.845310    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:34.845320    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:34.860966    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:34.860977    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:34.896625    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:34.896640    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:34.922619    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:34.922632    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:34.934669    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:34.934681    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:34.946778    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:34.946790    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:34.951261    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:34.951270    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:34.988797    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:34.988811    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:35.006505    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:35.006515    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:35.017775    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:35.017788    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:35.031179    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:35.031191    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:37.544879    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:42.545922    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:42.546195    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:42.572751    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:42.572871    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:42.590165    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:42.590246    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:42.603054    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:42.603125    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:42.614783    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:42.614856    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:42.629636    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:42.629709    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:42.640944    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:42.641022    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:42.651350    4497 logs.go:276] 0 containers: []
	W0729 10:38:42.651361    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:42.651418    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:42.662008    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:42.662026    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:42.662031    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:42.676351    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:42.676362    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:42.691020    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:42.691029    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:38:42.708663    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:42.708673    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:42.732465    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:42.732475    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:42.769599    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:42.769608    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:42.822652    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:42.822666    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:42.841136    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:42.841151    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:42.857689    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:42.857704    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:42.869284    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:42.869297    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:42.880978    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:42.880991    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:42.892532    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:42.892543    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:42.904369    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:42.904382    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:42.918407    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:42.918417    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:42.933576    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:42.933584    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:42.945138    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:42.945149    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:42.949501    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:42.949508    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:45.477351    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:50.479488    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:50.479811    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:50.503601    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:50.503701    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:50.519109    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:50.519192    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:50.531684    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:50.531760    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:50.542488    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:50.542559    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:50.552931    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:50.553001    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:50.563622    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:50.563694    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:50.574800    4497 logs.go:276] 0 containers: []
	W0729 10:38:50.574813    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:50.574871    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:50.585080    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:50.585096    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:50.585102    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:50.599033    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:50.599043    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:38:50.616608    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:50.616619    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:50.628866    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:50.628878    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:50.640290    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:50.640302    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:50.676320    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:50.676330    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:50.690847    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:50.690860    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:50.704081    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:50.704093    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:50.720855    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:50.720867    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:50.732322    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:50.732333    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:50.744231    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:50.744241    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:50.755396    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:50.755406    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:50.773138    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:50.773151    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:50.796845    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:50.796852    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:50.808492    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:50.808503    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:50.812702    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:50.812709    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:50.846783    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:50.846795    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:53.378085    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:58.380691    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:58.381187    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:58.418210    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:58.418336    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:58.439812    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:58.439907    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:58.454739    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:58.454819    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:58.469525    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:58.469605    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:58.480190    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:58.480260    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:58.490863    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:58.490932    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:58.501732    4497 logs.go:276] 0 containers: []
	W0729 10:38:58.501741    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:58.501795    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:58.513573    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:58.513590    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:58.513596    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:58.539414    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:58.539425    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:58.554311    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:58.554321    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:58.569198    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:58.569212    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:58.581414    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:58.581425    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:58.593782    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:58.593794    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:58.631782    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:58.631793    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:58.643735    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:58.643746    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:58.655913    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:58.655923    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:58.667655    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:58.667665    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:58.672032    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:58.672040    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:58.687401    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:58.687412    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:58.699531    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:58.699542    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:58.723342    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:58.723351    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:58.759876    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:58.759885    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:58.773671    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:58.773683    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:58.787310    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:58.787322    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:39:01.306807    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:06.309109    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:06.309283    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:06.320951    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:39:06.321024    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:06.332057    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:39:06.332127    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:06.342763    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:39:06.342834    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:06.353262    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:39:06.353328    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:06.363739    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:39:06.363804    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:06.374311    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:39:06.374384    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:06.384260    4497 logs.go:276] 0 containers: []
	W0729 10:39:06.384271    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:06.384328    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:06.394714    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
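Each gathering cycle begins with the seven "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" queries above, one per control-plane component. A hedged sketch of that discovery step (the helper name listContainers is mine, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of all containers, running or exited,
    // whose name carries the k8s_<component> prefix kubelet gives pod containers.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }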
	I0729 10:39:06.394730    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:39:06.394736    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:39:06.412003    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:06.412014    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:06.435729    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:39:06.435737    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:39:06.450139    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:39:06.450149    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:39:06.461683    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:39:06.461695    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:39:06.473999    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:39:06.474011    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:06.486272    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:39:06.486284    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:39:06.511446    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:39:06.511456    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:39:06.525732    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:39:06.525743    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:39:06.538317    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:39:06.538327    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:39:06.549566    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:39:06.549578    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:39:06.560821    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:39:06.560833    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:39:06.575845    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:39:06.575856    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:39:06.587330    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:06.587341    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:06.624100    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:06.624108    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:06.628128    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:06.628134    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:06.661856    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:39:06.661867    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
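The per-container gathering steps in the cycle above all reduce to "docker logs --tail 400 <id>" run through a shell on the VM. A sketch of one such step under the same assumptions; the container ID is copied from the log, and combined output is captured because docker logs writes to both streams:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs fetches the last 400 log lines of one container, as the
    // ssh_runner lines above do on the remote VM.
    func gatherLogs(containerID string) (string, error) {
        cmd := exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+containerID)
        out, err := cmd.CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := gatherLogs("c647ccee48a0") // etcd container ID from the log above
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(out)
    }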
	I0729 10:39:09.177066    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:14.179088    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:14.179197    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:14.192463    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:39:14.192537    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:14.204152    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:39:14.204233    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:14.215258    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:39:14.215332    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:14.226795    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:39:14.226870    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:14.237095    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:39:14.237163    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:14.247699    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:39:14.247770    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:14.258251    4497 logs.go:276] 0 containers: []
	W0729 10:39:14.258264    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:14.258325    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:14.268871    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:39:14.268893    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:39:14.268900    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:39:14.280113    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:39:14.280127    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:39:14.305596    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:39:14.305607    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:39:14.323383    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:39:14.323396    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:39:14.339656    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:39:14.339667    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:39:14.353671    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:39:14.353683    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:39:14.370777    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:14.370789    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:14.405968    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:39:14.405982    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:39:14.421444    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:39:14.421454    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:39:14.434536    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:39:14.434548    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:14.446956    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:39:14.446967    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:39:14.458876    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:14.458887    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:14.463420    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:39:14.463429    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:39:14.482780    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:39:14.482792    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:39:14.494681    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:39:14.494691    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:39:14.508343    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:14.508356    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:14.531645    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:14.531656    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:17.069410    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:22.071504    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:22.071596    4497 kubeadm.go:597] duration metric: took 4m4.481038667s to restartPrimaryControlPlane
	W0729 10:39:22.071671    4497 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 10:39:22.071701    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 10:39:23.041381    4497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:39:23.046361    4497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:39:23.049407    4497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:39:23.052171    4497 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:39:23.052177    4497 kubeadm.go:157] found existing configuration files:
	
	I0729 10:39:23.052199    4497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/admin.conf
	I0729 10:39:23.054753    4497 kubeadm.go:163] "https://control-plane.minikube.internal:50308" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:39:23.054773    4497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:39:23.057866    4497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/kubelet.conf
	I0729 10:39:23.061208    4497 kubeadm.go:163] "https://control-plane.minikube.internal:50308" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:39:23.061245    4497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:39:23.063995    4497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/controller-manager.conf
	I0729 10:39:23.066441    4497 kubeadm.go:163] "https://control-plane.minikube.internal:50308" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:39:23.066461    4497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:39:23.069111    4497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/scheduler.conf
	I0729 10:39:23.071877    4497 kubeadm.go:163] "https://control-plane.minikube.internal:50308" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:39:23.071897    4497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
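The four grep/rm pairs above implement a simple stale-config check: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint, it is removed before kubeadm init runs. A local-filesystem sketch of the same loop (minikube executes it over SSH; the helper below and its name are illustrative):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleConfig removes every config file that does not reference the
    // expected control-plane endpoint, mirroring the grep/rm pairs above.
    func cleanStaleConfig(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
                os.Remove(p) // ignore the error, as "rm -f" would
            }
        }
    }

    func main() {
        cleanStaleConfig("https://control-plane.minikube.internal:50308", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }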
	I0729 10:39:23.074366    4497 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 10:39:23.091387    4497 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 10:39:23.091490    4497 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:39:23.144003    4497 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:39:23.144065    4497 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:39:23.144147    4497 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 10:39:23.195164    4497 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:39:23.200330    4497 out.go:204]   - Generating certificates and keys ...
	I0729 10:39:23.200365    4497 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:39:23.200456    4497 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:39:23.200490    4497 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 10:39:23.200514    4497 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 10:39:23.200565    4497 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 10:39:23.200597    4497 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 10:39:23.200638    4497 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 10:39:23.200689    4497 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 10:39:23.200754    4497 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 10:39:23.200897    4497 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 10:39:23.201089    4497 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 10:39:23.201123    4497 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:39:23.560198    4497 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:39:23.728242    4497 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:39:23.771253    4497 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:39:23.883203    4497 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:39:23.913069    4497 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:39:23.913453    4497 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:39:23.913477    4497 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:39:23.999946    4497 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:39:24.003916    4497 out.go:204]   - Booting up control plane ...
	I0729 10:39:24.003968    4497 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:39:24.004012    4497 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:39:24.004062    4497 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:39:24.004102    4497 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:39:24.004269    4497 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 10:39:28.501590    4497 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502104 seconds
	I0729 10:39:28.501733    4497 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:39:28.505893    4497 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:39:29.026617    4497 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:39:29.027017    4497 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-466000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:39:29.529954    4497 kubeadm.go:310] [bootstrap-token] Using token: y3iu0w.0sj8j61agh78ao9n
	I0729 10:39:29.536412    4497 out.go:204]   - Configuring RBAC rules ...
	I0729 10:39:29.536474    4497 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:39:29.536526    4497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:39:29.539975    4497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:39:29.540824    4497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 10:39:29.541596    4497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:39:29.542483    4497 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:39:29.545688    4497 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:39:29.732347    4497 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:39:29.934055    4497 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:39:29.934487    4497 kubeadm.go:310] 
	I0729 10:39:29.934526    4497 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:39:29.934532    4497 kubeadm.go:310] 
	I0729 10:39:29.934566    4497 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:39:29.934570    4497 kubeadm.go:310] 
	I0729 10:39:29.934581    4497 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:39:29.934608    4497 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:39:29.934640    4497 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:39:29.934648    4497 kubeadm.go:310] 
	I0729 10:39:29.934676    4497 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:39:29.934679    4497 kubeadm.go:310] 
	I0729 10:39:29.934709    4497 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:39:29.934713    4497 kubeadm.go:310] 
	I0729 10:39:29.934745    4497 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:39:29.934782    4497 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:39:29.934835    4497 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:39:29.934840    4497 kubeadm.go:310] 
	I0729 10:39:29.934896    4497 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:39:29.934938    4497 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:39:29.934941    4497 kubeadm.go:310] 
	I0729 10:39:29.934991    4497 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y3iu0w.0sj8j61agh78ao9n \
	I0729 10:39:29.935043    4497 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e543544bbdf55d58d5e8ecb84a321dadc33a389aefb88a9b79f2e5e89d2eeaba \
	I0729 10:39:29.935060    4497 kubeadm.go:310] 	--control-plane 
	I0729 10:39:29.935063    4497 kubeadm.go:310] 
	I0729 10:39:29.935109    4497 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:39:29.935112    4497 kubeadm.go:310] 
	I0729 10:39:29.935160    4497 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y3iu0w.0sj8j61agh78ao9n \
	I0729 10:39:29.935216    4497 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e543544bbdf55d58d5e8ecb84a321dadc33a389aefb88a9b79f2e5e89d2eeaba 
	I0729 10:39:29.935725    4497 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
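The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info, so it can be recomputed from ca.crt. In this sketch the certificate path is an assumption about where minikube keeps the CA on the node; the printed digest should match the sha256:e5435... value kubeadm emitted:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the raw SubjectPublicKeyInfo, not the whole certificate.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }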
	I0729 10:39:29.935783    4497 cni.go:84] Creating CNI manager for ""
	I0729 10:39:29.935792    4497 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:39:29.940058    4497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 10:39:29.943845    4497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 10:39:29.946628    4497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 10:39:29.951251    4497 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:39:29.951292    4497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:29.951322    4497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-466000 minikube.k8s.io/updated_at=2024_07_29T10_39_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=running-upgrade-466000 minikube.k8s.io/primary=true
	I0729 10:39:29.982662    4497 kubeadm.go:1113] duration metric: took 31.40275ms to wait for elevateKubeSystemPrivileges
	I0729 10:39:29.982702    4497 ops.go:34] apiserver oom_adj: -16
	I0729 10:39:30.000099    4497 kubeadm.go:394] duration metric: took 4m12.425660375s to StartCluster
	I0729 10:39:30.000118    4497 settings.go:142] acquiring lock: {Name:mk00a8a4362ef98c344b6c02e7313691374680e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:30.000205    4497 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:39:30.000590    4497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/kubeconfig: {Name:mk69e1ff39ac907f2664a3f00c50d678e5bdc356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:30.000796    4497 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:39:30.000808    4497 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 10:39:30.000840    4497 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-466000"
	I0729 10:39:30.000864    4497 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-466000"
	I0729 10:39:30.000867    4497 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-466000"
	W0729 10:39:30.000869    4497 addons.go:243] addon storage-provisioner should already be in state true
	I0729 10:39:30.000875    4497 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-466000"
	I0729 10:39:30.000878    4497 config.go:182] Loaded profile config "running-upgrade-466000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:39:30.000881    4497 host.go:66] Checking if "running-upgrade-466000" exists ...
	I0729 10:39:30.001790    4497 kapi.go:59] client config for running-upgrade-466000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/client.key", CAFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1027180c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 10:39:30.001909    4497 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-466000"
	W0729 10:39:30.001913    4497 addons.go:243] addon default-storageclass should already be in state true
	I0729 10:39:30.001920    4497 host.go:66] Checking if "running-upgrade-466000" exists ...
	I0729 10:39:30.004892    4497 out.go:177] * Verifying Kubernetes components...
	I0729 10:39:30.005261    4497 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:39:30.009209    4497 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:39:30.009216    4497 sshutil.go:53] new ssh client: &{IP:localhost Port:50276 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/running-upgrade-466000/id_rsa Username:docker}
	I0729 10:39:30.012788    4497 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:39:30.016833    4497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:39:30.022829    4497 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:39:30.022837    4497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:39:30.022844    4497 sshutil.go:53] new ssh client: &{IP:localhost Port:50276 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/running-upgrade-466000/id_rsa Username:docker}
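The two sshutil lines above construct the SSH client that every ssh_runner command in this log goes through: key-based auth as user docker against the forwarded localhost port. A sketch with golang.org/x/crypto/ssh, reusing the port and key path from the log; host-key checking is disabled here only because the target is a throwaway test VM:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/running-upgrade-466000/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "localhost:50276", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only; never in production
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        // One command per session, just like the Run: lines in this log.
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, _ := session.CombinedOutput("sudo systemctl is-active --quiet service kubelet; echo $?")
        fmt.Printf("%s", out)
    }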
	I0729 10:39:30.107398    4497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:39:30.112657    4497 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:39:30.112705    4497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:39:30.116514    4497 api_server.go:72] duration metric: took 115.713625ms to wait for apiserver process to appear ...
	I0729 10:39:30.116523    4497 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:39:30.116529    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:30.130753    4497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:39:30.190164    4497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:39:35.117970    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:35.117993    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:40.118120    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:40.118149    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:45.118130    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:45.118157    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:50.118383    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:50.118463    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:55.118656    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:55.118701    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:00.119399    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:00.119427    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 10:40:00.481117    4497 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 10:40:00.484757    4497 out.go:177] * Enabled addons: storage-provisioner
	I0729 10:40:00.496696    4497 addons.go:510] duration metric: took 30.4973465s for enable addons: enabled=[storage-provisioner]
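The 'default-storageclass' failure above is a plain List call timing out against the unreachable apiserver. Against a healthy cluster the same check is a one-liner on the storage.k8s.io/v1 API; a client-go sketch, with the kubeconfig path taken from the log and the standard annotation Kubernetes uses to mark the default class:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // The call that failed above: listing StorageClasses.
        scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err) // e.g. "dial tcp 10.0.2.15:8443: i/o timeout" while the apiserver is down
        }
        for _, sc := range scs.Items {
            def := sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true"
            fmt.Printf("%s default=%v\n", sc.Name, def)
        }
    }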
	I0729 10:40:05.120029    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:05.120071    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:10.120957    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:10.121009    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:15.122234    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:15.122296    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:20.123985    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:20.124014    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:25.126077    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:25.126142    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:30.128267    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:30.128355    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:30.146765    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:40:30.146835    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:30.168665    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:40:30.168749    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:30.179386    4497 logs.go:276] 2 containers: [d43e4d4e905e b67afba30dbd]
	I0729 10:40:30.179454    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:30.189942    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:40:30.190010    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:30.200575    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:40:30.200638    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:30.211345    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:40:30.211407    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:30.221428    4497 logs.go:276] 0 containers: []
	W0729 10:40:30.221438    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:30.221491    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:30.231494    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:40:30.231508    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:30.231513    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:40:30.264173    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:40:30.264269    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:40:30.265620    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:30.265628    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:30.270509    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:40:30.270516    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:40:30.284485    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:40:30.284496    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:40:30.295711    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:40:30.295724    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:40:30.308702    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:40:30.308716    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:40:30.319869    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:30.319883    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:30.343054    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:30.343063    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:30.380148    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:40:30.380162    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:40:30.393964    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:40:30.393975    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:40:30.405617    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:40:30.405630    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:40:30.420694    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:40:30.420711    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:40:30.438921    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:40:30.438932    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:30.450932    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:40:30.450946    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:40:30.450972    4497 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0729 10:40:30.450978    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	  Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:40:30.450983    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	  Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:40:30.450989    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:40:30.450991    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:40:40.454338    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:45.455641    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:45.455847    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:45.474099    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:40:45.474191    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:45.488145    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:40:45.488221    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:45.500722    4497 logs.go:276] 2 containers: [d43e4d4e905e b67afba30dbd]
	I0729 10:40:45.500790    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:45.512973    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:40:45.513045    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:45.524607    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:40:45.524676    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:45.536771    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:40:45.536838    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:45.548269    4497 logs.go:276] 0 containers: []
	W0729 10:40:45.548281    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:45.548339    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:45.559615    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:40:45.559633    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:45.559639    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:45.597190    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:40:45.597200    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:40:45.609977    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:40:45.609989    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:40:45.622464    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:40:45.622477    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:40:45.638872    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:40:45.638888    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:40:45.658188    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:45.658199    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:45.683750    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:45.683769    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:40:45.717630    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:40:45.717729    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:40:45.719170    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:45.719175    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:45.723937    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:40:45.723949    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:40:45.740460    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:40:45.740473    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:40:45.755292    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:40:45.755301    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:40:45.773978    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:40:45.773987    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:40:45.786645    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:40:45.786653    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:45.798713    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:40:45.798726    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:40:45.798752    4497 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0729 10:40:45.798756    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	  Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:40:45.798760    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	  Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:40:45.798764    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:40:45.798766    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:40:55.868658    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:00.870738    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:00.871161    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:00.913666    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:41:00.913794    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:00.941785    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:41:00.941865    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:00.954782    4497 logs.go:276] 2 containers: [d43e4d4e905e b67afba30dbd]
	I0729 10:41:00.954861    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:00.966406    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:41:00.966473    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:00.977443    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:41:00.977518    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:00.992316    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:41:00.992377    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:01.003143    4497 logs.go:276] 0 containers: []
	W0729 10:41:01.003155    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:01.003215    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:01.014026    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:41:01.014040    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:01.014046    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:41:01.047529    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:01.047620    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:01.048981    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:01.048987    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:01.053237    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:01.053246    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:01.087605    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:41:01.087615    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:41:01.099555    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:41:01.099568    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:41:01.110994    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:41:01.111005    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:41:01.128645    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:41:01.128657    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:41:01.140817    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:01.140828    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:01.163857    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:41:01.163866    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:01.175707    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:41:01.175719    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:41:01.190328    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:41:01.190338    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:41:01.204299    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:41:01.204310    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:41:01.216101    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:41:01.216110    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:41:01.230749    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:01.230760    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:41:01.230787    4497 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0729 10:41:01.230791    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	  Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:01.230794    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:01.230810    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:01.230814    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
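
The cycle above repeats for the rest of this test: an apiserver health probe times out after roughly five seconds, and minikube responds by collecting logs from every control-plane container before trying again. The probe can be reproduced by hand against the same guest address; the following is a sketch for manual debugging, where the 5-second bound mirrors the Client.Timeout in the log rather than any documented default:

    # Probe the apiserver health endpoint the way the wait loop does.
    # -k skips TLS verification (the test cluster's certs are not trusted
    # by the host); --max-time bounds the request like the client timeout
    # seen in the log above.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
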
	I0729 10:41:11.234714    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:16.236442    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:16.236811    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:16.268862    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:41:16.268992    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:16.288055    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:41:16.288145    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:16.302121    4497 logs.go:276] 2 containers: [d43e4d4e905e b67afba30dbd]
	I0729 10:41:16.302187    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:16.313493    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:41:16.313569    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:16.324182    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:41:16.324255    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:16.334720    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:41:16.334786    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:16.352865    4497 logs.go:276] 0 containers: []
	W0729 10:41:16.352876    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:16.352941    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:16.363846    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:41:16.363862    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:16.363869    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:16.398681    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:41:16.398695    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:41:16.416337    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:16.416350    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:16.441280    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:41:16.441289    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:41:16.455505    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:41:16.455516    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:41:16.467635    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:41:16.467650    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:41:16.479491    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:41:16.479503    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:41:16.495648    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:41:16.495658    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:41:16.514067    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:16.514081    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:41:16.546347    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:16.546439    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:16.547777    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:16.547782    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:16.552292    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:41:16.552297    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:41:16.566357    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:41:16.566369    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:41:16.578290    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:41:16.578300    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:16.590342    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:16.590353    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:41:16.590378    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:41:16.590383    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:16.590400    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:16.590404    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:16.590407    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
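
Each discovery pass issues one docker ps query per control-plane component, matching on the k8s_ name prefix that dockershim/cri-dockerd gives pod containers and printing only the container ID. Run interactively, the same query looks like this (the quoting around the Go template is added for the shell; the log runs it unquoted):

    # Find the kube-apiserver container, ID only, including exited ones.
    docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'
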
	I0729 10:41:26.594370    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:31.596453    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:31.596653    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:31.615911    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:41:31.616000    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:31.630476    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:41:31.630550    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:31.643008    4497 logs.go:276] 2 containers: [d43e4d4e905e b67afba30dbd]
	I0729 10:41:31.643074    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:31.657579    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:41:31.657650    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:31.668928    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:41:31.668991    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:31.680736    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:41:31.680804    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:31.693355    4497 logs.go:276] 0 containers: []
	W0729 10:41:31.693364    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:31.693417    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:31.703673    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:41:31.703690    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:41:31.703696    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:41:31.717621    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:41:31.717632    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:41:31.734842    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:41:31.734854    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:41:31.748349    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:41:31.748361    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:41:31.772027    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:41:31.772038    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:41:31.783589    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:31.783601    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:41:31.815454    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:31.815547    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:31.816891    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:31.816903    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:31.821046    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:41:31.821054    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:41:31.839143    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:41:31.839154    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:31.850270    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:31.850285    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:31.875642    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:31.875654    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:31.910412    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:41:31.910426    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:41:31.922184    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:41:31.922196    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:41:31.936714    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:31.936727    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:41:31.936752    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:41:31.936757    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:31.936768    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:31.936773    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:31.936776    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
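
The kubelet problem flagged in every cycle is an authorization failure: the node authorizer grants a kubelet read access to a ConfigMap only once a pod referencing it is bound to that node, and here it finds no relationship between node 'running-upgrade-466000' and the coredns ConfigMap, so the list and watch are refused. Whether the node identity can read the object can be checked from outside the kubelet; this is a sketch that assumes a working kubeconfig with impersonation rights:

    # Reproduce the kubelet's view by impersonating its node identity.
    # The node authorizer keys on both the user name and the group.
    kubectl auth can-i list configmaps \
      --namespace kube-system \
      --as system:node:running-upgrade-466000 \
      --as-group system:nodes
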
	I0729 10:41:41.940634    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:46.942851    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:46.943290    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:46.983594    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:41:46.983735    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:47.005221    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:41:47.005340    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:47.027301    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:41:47.027383    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:47.039442    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:41:47.039507    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:47.050576    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:41:47.050645    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:47.061838    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:41:47.061906    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:47.071867    4497 logs.go:276] 0 containers: []
	W0729 10:41:47.071878    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:47.071938    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:47.082579    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:41:47.082595    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:41:47.082599    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:47.095394    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:47.095405    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:41:47.128517    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:47.128608    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:47.129965    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:47.129971    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:47.134737    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:47.134744    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:47.169236    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:41:47.169248    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:41:47.181723    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:41:47.181734    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:41:47.200833    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:41:47.200842    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:41:47.215387    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:41:47.215397    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:41:47.230564    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:41:47.230575    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:41:47.242679    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:41:47.242693    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:41:47.256962    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:41:47.256971    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:41:47.269286    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:41:47.269298    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:41:47.281701    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:41:47.281713    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:41:47.295736    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:41:47.295747    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:41:47.313367    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:47.313379    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:47.337410    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:47.337418    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:41:47.337441    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:41:47.337446    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:47.337449    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:47.337453    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:47.337456    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
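
The "container status" step is deliberately runtime-agnostic: the backtick substitution "which crictl || echo crictl" yields the literal word crictl when the binary is missing, that command then fails, and the trailing "|| sudo docker ps -a" falls through to Docker. The same guard written out explicitly (an equivalent rewrite, not the minikube source):

    # Prefer crictl when installed; otherwise fall back to docker.
    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a
    else
        sudo docker ps -a
    fi
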
	I0729 10:41:57.341248    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:02.343392    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:02.343802    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:42:02.378751    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:42:02.378879    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:42:02.405112    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:42:02.405189    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:42:02.420216    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:42:02.420293    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:42:02.431912    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:42:02.431978    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:42:02.442352    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:42:02.442423    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:42:02.453186    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:42:02.453258    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:42:02.463854    4497 logs.go:276] 0 containers: []
	W0729 10:42:02.463864    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:42:02.463922    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:42:02.474795    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:42:02.474813    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:42:02.474819    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:42:02.494525    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:42:02.494534    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:42:02.512490    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:42:02.512501    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:42:02.537735    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:42:02.537743    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:42:02.549930    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:42:02.549941    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:42:02.554630    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:42:02.554639    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:42:02.569385    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:42:02.569396    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:42:02.581246    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:42:02.581260    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:42:02.615446    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:02.615544    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:02.616934    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:42:02.616940    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:42:02.629108    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:42:02.629121    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:42:02.640459    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:42:02.640469    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:42:02.654560    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:42:02.654574    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:42:02.666867    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:42:02.666881    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:42:02.679221    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:42:02.679236    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:42:02.715330    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:42:02.715345    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:42:02.726535    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:02.726546    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:42:02.726573    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:42:02.726580    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:02.726583    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:02.726621    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:02.726638    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
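
Both the Docker and kubelet passes pull the most recent 400 journal entries per pass: -u selects a systemd unit and may be repeated to merge several units, and -n caps the line count. On the guest the two collections are simply:

    # Container runtime units, newest 400 lines.
    sudo journalctl -u docker -u cri-docker -n 400
    # Kubelet unit, same cap.
    sudo journalctl -u kubelet -n 400
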
	I0729 10:42:12.728641    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:17.730705    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:17.730850    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:42:17.743893    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:42:17.743963    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:42:17.755486    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:42:17.755553    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:42:17.766338    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:42:17.766412    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:42:17.777643    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:42:17.777708    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:42:17.789032    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:42:17.789102    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:42:17.799890    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:42:17.799958    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:42:17.810098    4497 logs.go:276] 0 containers: []
	W0729 10:42:17.810109    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:42:17.810163    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:42:17.820473    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:42:17.820492    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:42:17.820497    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:42:17.852783    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:17.852875    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:17.854212    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:42:17.854217    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:42:17.865586    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:42:17.865598    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:42:17.878501    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:42:17.878517    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:42:17.914103    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:42:17.914116    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:42:17.928114    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:42:17.928124    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:42:17.942416    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:42:17.942429    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:42:17.959915    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:42:17.959923    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:42:17.964642    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:42:17.964648    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:42:17.981734    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:42:17.981743    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:42:17.994171    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:42:17.994181    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:42:18.006008    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:42:18.006018    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:42:18.021721    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:42:18.021738    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:42:18.038282    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:42:18.038296    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:42:18.050367    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:42:18.050378    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:42:18.076049    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:18.076066    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:42:18.076099    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:42:18.076112    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:18.076118    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:18.076123    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:18.076126    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
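
The dmesg pass narrows the kernel ring buffer to warnings and worse. Reading the util-linux flags: -P disables the pager, -H selects human-readable output, -L=never suppresses color, and --level takes a comma-separated severity list; tail then caps the result at 400 lines:

    # Kernel messages at warn severity or above, no pager, no color.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
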
	I0729 10:42:28.079921    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:33.081999    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:33.082267    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:42:33.104578    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:42:33.104691    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:42:33.120245    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:42:33.120316    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:42:33.149269    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:42:33.149347    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:42:33.164820    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:42:33.164889    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:42:33.175695    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:42:33.175758    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:42:33.185960    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:42:33.186021    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:42:33.195877    4497 logs.go:276] 0 containers: []
	W0729 10:42:33.195889    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:42:33.195949    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:42:33.206076    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:42:33.206093    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:42:33.206097    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:42:33.220581    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:42:33.220593    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:42:33.234440    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:42:33.234454    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:42:33.245801    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:42:33.245811    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:42:33.257446    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:42:33.257458    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:42:33.272914    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:42:33.272923    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:42:33.296877    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:42:33.296886    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:42:33.330542    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:42:33.330553    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:42:33.343833    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:42:33.343843    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:42:33.355660    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:42:33.355669    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:42:33.367157    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:42:33.367168    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:42:33.379369    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:42:33.379379    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:42:33.412768    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:33.412860    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:33.414283    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:42:33.414289    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:42:33.419562    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:42:33.419572    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:42:33.431113    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:42:33.431123    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:42:33.449980    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:33.449990    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:42:33.450019    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:42:33.450023    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:33.450026    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:33.450037    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:33.450042    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
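
The "describe nodes" pass runs the kubectl binary that minikube stages inside the guest, pinned to the cluster's Kubernetes version (v1.24.1 here), against the guest-local kubeconfig rather than the host's. The same call can be made over minikube ssh; this is a sketch, and the profile name is an assumption taken from the node name in the log:

    # Profile name assumed from the node name in the log.
    minikube -p running-upgrade-466000 ssh -- \
      sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
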
	I0729 10:42:43.453849    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:48.455923    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:48.456067    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:42:48.468133    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:42:48.468211    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:42:48.478538    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:42:48.478619    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:42:48.492264    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:42:48.492338    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:42:48.502595    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:42:48.502660    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:42:48.515581    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:42:48.515642    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:42:48.526236    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:42:48.526303    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:42:48.536085    4497 logs.go:276] 0 containers: []
	W0729 10:42:48.536099    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:42:48.536148    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:42:48.546687    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:42:48.546703    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:42:48.546710    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:42:48.560404    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:42:48.560418    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:42:48.572526    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:42:48.572539    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:42:48.606507    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:48.606598    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:48.607963    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:42:48.607968    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:42:48.612318    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:42:48.612327    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:42:48.623950    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:42:48.623961    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:42:48.639136    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:42:48.639149    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:42:48.652143    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:42:48.652153    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:42:48.677212    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:42:48.677222    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:42:48.688877    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:42:48.688891    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:42:48.723838    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:42:48.723850    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:42:48.736154    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:42:48.736168    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:42:48.753576    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:42:48.753589    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:42:48.767106    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:42:48.767120    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:42:48.781423    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:42:48.781437    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:42:48.796322    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:48.796336    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:42:48.796363    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:42:48.796368    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:48.796371    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:48.796377    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:48.796380    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:58.800186    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:03.802302    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:03.802447    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:43:03.817295    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:43:03.817372    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:43:03.828830    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:43:03.828898    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:43:03.839756    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:43:03.839818    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:43:03.850115    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:43:03.850184    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:43:03.860488    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:43:03.860557    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:43:03.871207    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:43:03.871271    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:43:03.883856    4497 logs.go:276] 0 containers: []
	W0729 10:43:03.883868    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:43:03.883920    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:43:03.894920    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:43:03.894935    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:43:03.894941    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:43:03.929562    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:43:03.929655    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:43:03.931050    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:43:03.931055    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:43:03.942676    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:43:03.942686    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:43:03.954706    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:43:03.954717    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:43:03.959165    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:43:03.959174    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:43:03.970802    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:43:03.970817    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:43:03.985159    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:43:03.985171    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:43:04.010807    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:43:04.010823    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:43:04.045505    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:43:04.045515    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:43:04.059818    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:43:04.059831    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:43:04.071379    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:43:04.071389    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:43:04.088750    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:43:04.088759    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:43:04.106304    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:43:04.106317    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:43:04.120348    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:43:04.120361    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:43:04.136688    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:43:04.136701    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:43:04.148556    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:43:04.148567    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:43:04.148595    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:43:04.148599    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:43:04.148603    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:43:04.148607    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:43:04.148609    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:43:14.152444    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:19.154600    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:19.154832    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:43:19.179722    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:43:19.179822    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:43:19.196370    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:43:19.196483    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:43:19.209695    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:43:19.209771    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:43:19.221005    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:43:19.221070    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:43:19.231692    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:43:19.231766    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:43:19.242340    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:43:19.242416    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:43:19.253483    4497 logs.go:276] 0 containers: []
	W0729 10:43:19.253497    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:43:19.253569    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:43:19.263969    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:43:19.263986    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:43:19.263991    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:43:19.275679    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:43:19.275689    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:43:19.290100    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:43:19.290111    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:43:19.301376    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:43:19.301387    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:43:19.319236    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:43:19.319248    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:43:19.337410    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:43:19.337421    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:43:19.351564    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:43:19.351574    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:43:19.365953    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:43:19.365965    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:43:19.377673    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:43:19.377685    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:43:19.389396    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:43:19.389406    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:43:19.401432    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:43:19.401443    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:43:19.405789    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:43:19.405795    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:43:19.416998    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:43:19.417011    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:43:19.449738    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:43:19.449838    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:43:19.451272    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:43:19.451280    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:43:19.506876    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:43:19.506887    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:43:19.530308    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:43:19.530317    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:43:19.530341    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:43:19.530345    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:43:19.530348    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:43:19.530353    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:43:19.530373    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:43:29.534169    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:34.536309    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:34.541020    4497 out.go:177] 
	W0729 10:43:34.544066    4497 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 10:43:34.544071    4497 out.go:239] * 
	W0729 10:43:34.544540    4497 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:43:34.559767    4497 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-466000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
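For orientation, the repeated "Checking apiserver healthz ... context deadline exceeded" lines above are the client-side wait that ultimately triggered GUEST_START. As a hedged sketch (not part of this harness), the same endpoint can be probed by hand while the profile is still running; 10.0.2.15 is the QEMU user-network guest address used throughout this log, and curl's -k skips verification of the self-signed apiserver certificate:

	out/minikube-darwin-arm64 -p running-upgrade-466000 ssh -- curl -sk --max-time 5 https://10.0.2.15:8443/healthz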
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-29 10:43:34.638647 -0700 PDT m=+2911.425843168
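To reproduce locally, the exact failing invocation is recorded in the failure line above, and the log-collection command suggested by the advice box can be pointed at the same profile (a sketch; assumes the same out/ build tree used by this run):

	out/minikube-darwin-arm64 start -p running-upgrade-466000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2
	out/minikube-darwin-arm64 -p running-upgrade-466000 logs --file=logs.txt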
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-466000 -n running-upgrade-466000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-466000 -n running-upgrade-466000: exit status 2 (15.567901667s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-466000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-813000          | force-systemd-flag-813000 | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-193000              | force-systemd-env-193000  | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-193000           | force-systemd-env-193000  | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT | 29 Jul 24 10:33 PDT |
	| start   | -p docker-flags-083000                | docker-flags-083000       | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-813000             | force-systemd-flag-813000 | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-813000          | force-systemd-flag-813000 | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT | 29 Jul 24 10:33 PDT |
	| start   | -p cert-expiration-315000             | cert-expiration-315000    | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-083000 ssh               | docker-flags-083000       | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-083000 ssh               | docker-flags-083000       | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-083000                | docker-flags-083000       | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT | 29 Jul 24 10:33 PDT |
	| start   | -p cert-options-456000                | cert-options-456000       | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-456000 ssh               | cert-options-456000       | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-456000 -- sudo        | cert-options-456000       | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-456000                | cert-options-456000       | jenkins | v1.33.1 | 29 Jul 24 10:33 PDT | 29 Jul 24 10:33 PDT |
	| start   | -p running-upgrade-466000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 10:33 PDT | 29 Jul 24 10:35 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-466000             | running-upgrade-466000    | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-315000             | cert-expiration-315000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-315000             | cert-expiration-315000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	| start   | -p kubernetes-upgrade-436000          | kubernetes-upgrade-436000 | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-436000          | kubernetes-upgrade-436000 | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	| start   | -p kubernetes-upgrade-436000          | kubernetes-upgrade-436000 | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-436000          | kubernetes-upgrade-436000 | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	| start   | -p stopped-upgrade-396000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:37 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-396000 stop           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 10:37 PDT | 29 Jul 24 10:37 PDT |
	| start   | -p stopped-upgrade-396000             | stopped-upgrade-396000    | jenkins | v1.33.1 | 29 Jul 24 10:37 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:37:48
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:37:48.639626    4671 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:37:48.639799    4671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:48.639803    4671 out.go:304] Setting ErrFile to fd 2...
	I0729 10:37:48.639806    4671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:48.639965    4671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:37:48.641213    4671 out.go:298] Setting JSON to false
	I0729 10:37:48.661172    4671 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4032,"bootTime":1722270636,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:37:48.661242    4671 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:37:48.665842    4671 out.go:177] * [stopped-upgrade-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:37:48.673797    4671 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:37:48.673849    4671 notify.go:220] Checking for updates...
	I0729 10:37:48.680790    4671 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:37:48.683802    4671 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:37:48.689764    4671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:37:48.693777    4671 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:37:48.696834    4671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:37:48.701015    4671 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:37:48.704790    4671 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 10:37:48.707804    4671 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:37:48.711691    4671 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:37:48.718799    4671 start.go:297] selected driver: qemu2
	I0729 10:37:48.718806    4671 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:37:48.718849    4671 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:37:48.721450    4671 cni.go:84] Creating CNI manager for ""
	I0729 10:37:48.721515    4671 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:37:48.721553    4671 start.go:340] cluster config:
	{Name:stopped-upgrade-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:37:48.721604    4671 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:37:48.729759    4671 out.go:177] * Starting "stopped-upgrade-396000" primary control-plane node in "stopped-upgrade-396000" cluster
	I0729 10:37:48.733688    4671 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 10:37:48.733704    4671 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 10:37:48.733715    4671 cache.go:56] Caching tarball of preloaded images
	I0729 10:37:48.733777    4671 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:37:48.733783    4671 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 10:37:48.733844    4671 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/config.json ...
	I0729 10:37:48.734274    4671 start.go:360] acquireMachinesLock for stopped-upgrade-396000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:37:48.734300    4671 start.go:364] duration metric: took 20.833µs to acquireMachinesLock for "stopped-upgrade-396000"
	I0729 10:37:48.734309    4671 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:37:48.734313    4671 fix.go:54] fixHost starting: 
	I0729 10:37:48.734417    4671 fix.go:112] recreateIfNeeded on stopped-upgrade-396000: state=Stopped err=<nil>
	W0729 10:37:48.734425    4671 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:37:48.742810    4671 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-396000" ...
	I0729 10:37:47.247871    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:37:47.247938    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:37:47.258934    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:37:47.259005    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:37:47.268953    4497 logs.go:276] 0 containers: []
	W0729 10:37:47.268964    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:37:47.269024    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:37:47.279381    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:37:47.279401    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:37:47.279406    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:37:47.305516    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:37:47.305530    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:37:47.319488    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:37:47.319498    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:37:47.356249    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:37:47.356257    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:37:47.374200    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:37:47.374209    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:37:47.385951    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:37:47.385961    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:37:47.397133    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:37:47.397143    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:37:47.422632    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:37:47.422640    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:37:47.427197    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:37:47.427206    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:37:47.441307    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:37:47.441317    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:37:47.459851    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:37:47.459865    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:37:47.475372    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:37:47.475400    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:37:47.497121    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:37:47.497132    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:37:47.509278    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:37:47.509290    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:37:47.545841    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:37:47.545853    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:37:47.558849    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:37:47.558863    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:37:47.571154    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:37:47.571166    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:37:50.085546    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:37:48.746776    4671 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:37:48.746836    4671 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50491-:22,hostfwd=tcp::50492-:2376,hostname=stopped-upgrade-396000 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/disk.qcow2
	I0729 10:37:48.794668    4671 main.go:141] libmachine: STDOUT: 
	I0729 10:37:48.794694    4671 main.go:141] libmachine: STDERR: 
	I0729 10:37:48.794700    4671 main.go:141] libmachine: Waiting for VM to start (ssh -p 50491 docker@127.0.0.1)...
	I0729 10:37:55.088013    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:37:55.088148    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:37:55.099785    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:37:55.099859    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:37:55.111074    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:37:55.111148    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:37:55.122076    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:37:55.122132    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:37:55.134301    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:37:55.134360    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:37:55.145691    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:37:55.145750    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:37:55.156362    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:37:55.156430    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:37:55.166864    4497 logs.go:276] 0 containers: []
	W0729 10:37:55.166877    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:37:55.166934    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:37:55.177702    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:37:55.177718    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:37:55.177723    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:37:55.182361    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:37:55.182367    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:37:55.194195    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:37:55.194205    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:37:55.206444    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:37:55.206456    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:37:55.244096    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:37:55.244106    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:37:55.257955    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:37:55.257968    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:37:55.283785    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:37:55.283795    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:37:55.295693    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:37:55.295708    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:37:55.307613    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:37:55.307627    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:37:55.331653    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:37:55.331660    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:37:55.367672    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:37:55.367683    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:37:55.382713    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:37:55.382724    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:37:55.402001    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:37:55.402012    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:37:55.413903    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:37:55.413914    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:37:55.429050    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:37:55.429063    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:37:55.440388    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:37:55.440400    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:37:55.457809    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:37:55.457820    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:37:57.974159    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:02.976215    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:02.976383    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:02.991959    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:02.992039    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:03.004050    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:03.004119    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:03.014153    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:03.014225    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:03.024531    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:03.024603    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:03.035350    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:03.035419    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:03.050217    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:03.050289    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:03.060354    4497 logs.go:276] 0 containers: []
	W0729 10:38:03.060370    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:03.060422    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:03.077507    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:03.077525    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:03.077530    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:03.092299    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:03.092311    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:03.103599    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:03.103609    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:03.118422    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:03.118433    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:03.129631    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:03.129643    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:03.154166    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:03.154177    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:03.168233    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:03.168246    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:03.202661    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:03.202668    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:03.216954    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:03.216966    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:38:03.234794    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:03.234806    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:03.246382    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:03.246396    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:03.257975    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:03.257987    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:03.292748    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:03.292761    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:03.315444    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:03.315456    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:03.339773    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:03.339782    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:03.351171    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:03.351183    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:03.355296    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:03.355304    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
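[Editor's note] The interleaved logs.go:123 / ssh_runner.go:195 pairs above follow one pattern: list the container IDs for each control-plane component with a filtered `docker ps -a`, then tail the last 400 lines of each container's log. A minimal standalone sketch of that pattern, with direct exec.Command calls standing in for minikube's SSH runner (an assumption of this sketch, not how ssh_runner.go actually dispatches commands):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println("docker ps failed:", err)
                continue
            }
            for _, id := range ids {
                // Mirrors `docker logs --tail 400 <id>` from the log above.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }

Two IDs per component (e.g. 742c989dfbd6 and 217ddd7b537f for kube-apiserver) most likely indicate a restarted container: `docker ps -a` keeps the exited instance, so its log is gathered alongside the running one.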
	I0729 10:38:05.872495    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:08.299537    4671 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/config.json ...
	I0729 10:38:08.300200    4671 machine.go:94] provisionDockerMachine start ...
	I0729 10:38:08.300378    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:08.300869    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:08.300883    4671 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 10:38:08.378824    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 10:38:08.378868    4671 buildroot.go:166] provisioning hostname "stopped-upgrade-396000"
	I0729 10:38:08.378986    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:08.379280    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:08.379295    4671 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-396000 && echo "stopped-upgrade-396000" | sudo tee /etc/hostname
	I0729 10:38:08.448739    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-396000
	
	I0729 10:38:08.448852    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:08.449036    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:08.449048    4671 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-396000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-396000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-396000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:38:08.504067    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
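[Editor's note] The SSH script above is a guarded, idempotent /etc/hosts edit: rewrite the 127.0.1.1 entry if one exists, append one otherwise, and do nothing when the hostname is already listed. A simplified sketch of the same guard (it uses plain grep -q rather than the log's grep -xq anchors, a deliberate simplification):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureHostname mirrors the guarded /etc/hosts edit from the SSH script above.
    func ensureHostname(name string) error {
        script := fmt.Sprintf(`
    if ! grep -q '\s%[1]s' /etc/hosts; then
        if grep -q '^127.0.1.1\s' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/' /etc/hosts
        else
            echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
        fi
    fi`, name)
        return exec.Command("/bin/bash", "-c", script).Run()
    }

    func main() {
        if err := ensureHostname("stopped-upgrade-396000"); err != nil {
            fmt.Println("hosts update failed:", err)
        }
    }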
	I0729 10:38:08.504080    4671 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19345-1151/.minikube CaCertPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19345-1151/.minikube}
	I0729 10:38:08.504090    4671 buildroot.go:174] setting up certificates
	I0729 10:38:08.504094    4671 provision.go:84] configureAuth start
	I0729 10:38:08.504103    4671 provision.go:143] copyHostCerts
	I0729 10:38:08.504175    4671 exec_runner.go:144] found /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.pem, removing ...
	I0729 10:38:08.504184    4671 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.pem
	I0729 10:38:08.504302    4671 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.pem (1082 bytes)
	I0729 10:38:08.504492    4671 exec_runner.go:144] found /Users/jenkins/minikube-integration/19345-1151/.minikube/cert.pem, removing ...
	I0729 10:38:08.504497    4671 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19345-1151/.minikube/cert.pem
	I0729 10:38:08.504552    4671 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19345-1151/.minikube/cert.pem (1123 bytes)
	I0729 10:38:08.504665    4671 exec_runner.go:144] found /Users/jenkins/minikube-integration/19345-1151/.minikube/key.pem, removing ...
	I0729 10:38:08.504669    4671 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19345-1151/.minikube/key.pem
	I0729 10:38:08.504731    4671 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19345-1151/.minikube/key.pem (1675 bytes)
	I0729 10:38:08.504831    4671 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-396000 san=[127.0.0.1 localhost minikube stopped-upgrade-396000]
	I0729 10:38:08.608985    4671 provision.go:177] copyRemoteCerts
	I0729 10:38:08.609015    4671 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:38:08.609023    4671 sshutil.go:53] new ssh client: &{IP:localhost Port:50491 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/id_rsa Username:docker}
	I0729 10:38:08.638485    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 10:38:08.645321    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:38:08.651691    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 10:38:08.659435    4671 provision.go:87] duration metric: took 155.341ms to configureAuth
	I0729 10:38:08.659449    4671 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:38:08.659587    4671 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:38:08.659626    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:08.659724    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:08.659732    4671 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 10:38:08.712531    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 10:38:08.712544    4671 buildroot.go:70] root file system type: tmpfs
	I0729 10:38:08.712597    4671 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 10:38:08.712649    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:08.712769    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:08.712803    4671 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 10:38:08.766942    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 10:38:08.766988    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:08.767097    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:08.767111    4671 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 10:38:09.126686    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 10:38:09.126699    4671 machine.go:97] duration metric: took 826.528875ms to provisionDockerMachine
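[Editor's note] The docker.service update just completed uses a write-then-swap idiom: the rendered unit goes to docker.service.new, `diff -u` compares it against the live unit, and only on a difference (or, as in this run, when no live unit exists yet) is the file moved into place and the daemon reloaded, enabled, and restarted. A sketch of that guard, assuming it runs inside the guest rather than over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyUnit reruns the exact guard from the log: swap in the new unit and
    // cycle docker only when docker.service.new differs from the live unit.
    func applyUnit() error {
        guard := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
            `{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
            `sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
        out, err := exec.Command("/bin/bash", "-c", guard).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        if err := applyUnit(); err != nil {
            fmt.Println("unit update failed:", err)
        }
    }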
	I0729 10:38:09.126706    4671 start.go:293] postStartSetup for "stopped-upgrade-396000" (driver="qemu2")
	I0729 10:38:09.126713    4671 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:38:09.126782    4671 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:38:09.126793    4671 sshutil.go:53] new ssh client: &{IP:localhost Port:50491 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/id_rsa Username:docker}
	I0729 10:38:09.159039    4671 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:38:09.160391    4671 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 10:38:09.160399    4671 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19345-1151/.minikube/addons for local assets ...
	I0729 10:38:09.160477    4671 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19345-1151/.minikube/files for local assets ...
	I0729 10:38:09.160571    4671 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19345-1151/.minikube/files/etc/ssl/certs/16482.pem -> 16482.pem in /etc/ssl/certs
	I0729 10:38:09.160663    4671 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:38:09.163506    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/files/etc/ssl/certs/16482.pem --> /etc/ssl/certs/16482.pem (1708 bytes)
	I0729 10:38:09.170528    4671 start.go:296] duration metric: took 43.818291ms for postStartSetup
	I0729 10:38:09.170541    4671 fix.go:56] duration metric: took 20.437199541s for fixHost
	I0729 10:38:09.170574    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:09.170705    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:09.170709    4671 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 10:38:09.220534    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722274689.681266712
	
	I0729 10:38:09.220540    4671 fix.go:216] guest clock: 1722274689.681266712
	I0729 10:38:09.220544    4671 fix.go:229] Guest: 2024-07-29 10:38:09.681266712 -0700 PDT Remote: 2024-07-29 10:38:09.170543 -0700 PDT m=+20.564879251 (delta=510.723712ms)
	I0729 10:38:09.220554    4671 fix.go:200] guest clock delta is within tolerance: 510.723712ms
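[Editor's note] The guest-clock check above runs `date +%s.%N` in the VM and compares the result with the host wall clock. A sketch of that comparison using the two timestamps from the log (it assumes the fractional part is a full 9-digit nanosecond field, which `%N` produces):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock parses the `date +%s.%N` output captured over SSH.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        // Both values are taken verbatim from the log lines above.
        guest, err := parseGuestClock("1722274689.681266712")
        if err != nil {
            panic(err)
        }
        host := time.Unix(1722274689, 170543000) // 2024-07-29 10:38:09.170543 -0700
        fmt.Printf("guest clock delta: %v\n", guest.Sub(host)) // ~510.723712ms, as logged
    }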
	I0729 10:38:09.220556    4671 start.go:83] releasing machines lock for "stopped-upgrade-396000", held for 20.487226792s
	I0729 10:38:09.220608    4671 ssh_runner.go:195] Run: cat /version.json
	I0729 10:38:09.220610    4671 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:38:09.220618    4671 sshutil.go:53] new ssh client: &{IP:localhost Port:50491 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/id_rsa Username:docker}
	I0729 10:38:09.220630    4671 sshutil.go:53] new ssh client: &{IP:localhost Port:50491 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/id_rsa Username:docker}
	W0729 10:38:09.351098    4671 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 10:38:09.351166    4671 ssh_runner.go:195] Run: systemctl --version
	I0729 10:38:09.353385    4671 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:38:09.355262    4671 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:38:09.355297    4671 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 10:38:09.358720    4671 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 10:38:09.366772    4671 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:38:09.366788    4671 start.go:495] detecting cgroup driver to use...
	I0729 10:38:09.366865    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:38:09.374110    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 10:38:09.378026    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 10:38:09.384437    4671 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 10:38:09.384495    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 10:38:09.387899    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:38:09.391131    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 10:38:09.394243    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:38:09.397287    4671 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:38:09.400102    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 10:38:09.403510    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 10:38:09.406818    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 10:38:09.409644    4671 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:38:09.412191    4671 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:38:09.415027    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:38:09.496781    4671 ssh_runner.go:195] Run: sudo systemctl restart containerd
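[Editor's note] The run of `sed -i` commands above edits /etc/containerd/config.toml in place to force the cgroupfs driver and the runc v2 runtime before containerd is restarted. A sketch reproducing two of those passes verbatim; running them on anything other than the minikube guest image is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Reproduces two of the in-place config.toml edits from the log: force
    // SystemdCgroup = false (the "cgroupfs" driver) and migrate the legacy
    // v1 linux runtime to io.containerd.runc.v2.
    func main() {
        edits := []string{
            `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
            `sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
        }
        for _, e := range edits {
            if out, err := exec.Command("/bin/bash", "-c", e).CombinedOutput(); err != nil {
                fmt.Printf("edit failed: %v\n%s", err, out)
                return
            }
        }
        fmt.Println("containerd configured for the cgroupfs driver")
    }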
	I0729 10:38:09.507124    4671 start.go:495] detecting cgroup driver to use...
	I0729 10:38:09.507196    4671 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 10:38:09.512771    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:38:09.520040    4671 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:38:09.535922    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:38:09.541035    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 10:38:09.545612    4671 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 10:38:09.573983    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 10:38:09.578431    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:38:09.583599    4671 ssh_runner.go:195] Run: which cri-dockerd
	I0729 10:38:09.585024    4671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 10:38:09.587842    4671 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 10:38:09.592916    4671 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 10:38:09.682117    4671 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 10:38:09.757540    4671 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 10:38:09.757622    4671 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 10:38:09.762823    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:38:09.843360    4671 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 10:38:11.021717    4671 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.178395s)
	I0729 10:38:11.021773    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 10:38:11.026456    4671 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 10:38:11.033187    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 10:38:11.038300    4671 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 10:38:11.124290    4671 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 10:38:11.208424    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:38:11.292415    4671 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 10:38:11.299014    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 10:38:11.304603    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:38:11.389340    4671 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 10:38:11.428773    4671 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 10:38:11.428852    4671 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 10:38:11.430947    4671 start.go:563] Will wait 60s for crictl version
	I0729 10:38:11.430999    4671 ssh_runner.go:195] Run: which crictl
	I0729 10:38:11.432420    4671 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:38:11.446566    4671 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 10:38:11.446636    4671 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 10:38:11.462936    4671 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 10:38:10.874859    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:10.875165    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:10.900951    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:10.901066    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:10.919380    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:10.919465    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:10.932713    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:10.932794    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:10.948640    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:10.948713    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:10.960423    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:10.960495    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:10.972056    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:10.972131    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:10.983236    4497 logs.go:276] 0 containers: []
	W0729 10:38:10.983250    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:10.983315    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:10.996022    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:10.996042    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:10.996048    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:11.008119    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:11.008132    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:11.033281    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:11.033292    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:11.070841    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:11.070853    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:11.085146    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:11.085165    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:11.096851    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:11.096865    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:11.117846    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:11.117858    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:38:11.136079    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:11.136093    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:11.179307    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:11.179319    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:11.193804    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:11.193816    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:11.220575    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:11.220584    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:11.232714    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:11.232726    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:11.244210    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:11.244220    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:11.257122    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:11.257135    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:11.262190    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:11.262198    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:11.275854    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:11.275865    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:11.287681    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:11.287693    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:11.487380    4671 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 10:38:11.487498    4671 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 10:38:11.488845    4671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:38:11.492323    4671 kubeadm.go:883] updating cluster {Name:stopped-upgrade-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 10:38:11.492367    4671 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 10:38:11.492413    4671 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 10:38:11.503214    4671 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 10:38:11.503230    4671 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 10:38:11.503275    4671 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 10:38:11.506601    4671 ssh_runner.go:195] Run: which lz4
	I0729 10:38:11.507959    4671 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 10:38:11.509368    4671 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 10:38:11.509380    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 10:38:12.416255    4671 docker.go:649] duration metric: took 908.368834ms to copy over tarball
	I0729 10:38:12.416312    4671 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 10:38:13.568858    4671 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.152587667s)
	I0729 10:38:13.568872    4671 ssh_runner.go:146] rm: /preloaded.tar.lz4
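[Editor's note] The preload sequence above is: stat /preloaded.tar.lz4 to see whether a previous run left the tarball behind, scp the ~360 MB preloaded-images tarball over only if that stat fails, unpack it into /var with lz4, then delete it. A local sketch of that flow (the scp hop is elided, which is an assumption of this sketch):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Mirrors the preload flow above: existence check, (elided) copy, extract.
    func main() {
        if err := exec.Command("stat", "/preloaded.tar.lz4").Run(); err != nil {
            fmt.Println("tarball missing; the real run scps ~360 MB from the host cache here")
        }
        out, err := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput()
        if err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        fmt.Println("preloaded images extracted under /var")
    }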
	I0729 10:38:13.584484    4671 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 10:38:13.587662    4671 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 10:38:13.592833    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:38:13.802385    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:13.675305    4671 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 10:38:15.172156    4671 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.496905125s)
	I0729 10:38:15.172255    4671 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 10:38:15.184940    4671 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 10:38:15.184956    4671 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 10:38:15.184962    4671 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 10:38:15.189387    4671 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:38:15.191047    4671 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:38:15.193038    4671 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:38:15.193058    4671 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:38:15.194101    4671 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:38:15.194958    4671 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:38:15.196373    4671 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:38:15.196479    4671 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:38:15.197118    4671 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:38:15.197957    4671 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:38:15.198439    4671 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 10:38:15.199192    4671 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:38:15.199838    4671 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:38:15.199997    4671 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:38:15.200763    4671 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 10:38:15.201310    4671 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:38:15.571457    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:38:15.583917    4671 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 10:38:15.583949    4671 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:38:15.584001    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:38:15.594370    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 10:38:15.610244    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:38:15.620391    4671 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 10:38:15.620414    4671 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:38:15.620464    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:38:15.622745    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:38:15.632208    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 10:38:15.639611    4671 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 10:38:15.639627    4671 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:38:15.639678    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:38:15.641579    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:38:15.652844    4671 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 10:38:15.652865    4671 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:38:15.652918    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:38:15.653032    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 10:38:15.657790    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 10:38:15.663896    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 10:38:15.672321    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 10:38:15.673595    4671 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 10:38:15.673612    4671 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 10:38:15.673645    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 10:38:15.684650    4671 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 10:38:15.684664    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 10:38:15.684668    4671 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:38:15.684708    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 10:38:15.684773    4671 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0729 10:38:15.687284    4671 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 10:38:15.687303    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 10:38:15.696568    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 10:38:15.696683    4671 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 10:38:15.698382    4671 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0729 10:38:15.698403    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0729 10:38:15.707147    4671 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 10:38:15.707158    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0729 10:38:15.721579    4671 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 10:38:15.721712    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:38:15.773009    4671 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 10:38:15.773069    4671 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 10:38:15.773091    4671 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:38:15.773160    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:38:15.807243    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 10:38:15.807384    4671 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0729 10:38:15.820760    4671 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 10:38:15.820793    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 10:38:15.907673    4671 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 10:38:15.907689    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 10:38:15.981613    4671 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 10:38:16.019693    4671 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 10:38:16.019708    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0729 10:38:16.035310    4671 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 10:38:16.035427    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:38:16.156972    4671 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 10:38:16.157016    4671 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 10:38:16.157036    4671 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:38:16.157091    4671 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:38:16.173408    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 10:38:16.173541    4671 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 10:38:16.174984    4671 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 10:38:16.174999    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 10:38:16.205172    4671 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 10:38:16.205188    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 10:38:16.434269    4671 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 10:38:16.434307    4671 cache_images.go:92] duration metric: took 1.249390875s to LoadCachedImages
	W0729 10:38:16.434354    4671 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
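[Editor's note] The LoadCachedImages loop above decides per image whether a transfer is needed by comparing `docker image inspect --format {{.Id}}` against the expected content hash; stale copies are removed with `docker rmi`, and the cached tarball is piped through `docker load`. A sketch of that decision and load step, using the pause:3.7 name and hash from the log (the tarball path is an assumed stand-in for the minikube cache layout):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer mirrors cache_images.go:116 above: an image needs a
    // transfer when `docker image inspect` cannot find it or reports a
    // different content ID than the cache expects.
    func needsTransfer(image, wantID string) bool {
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // image absent from the runtime
        }
        return !strings.Contains(strings.TrimSpace(string(out)), wantID)
    }

    // loadFromCache mirrors the `sudo cat <tar> | docker load` pipeline above.
    func loadFromCache(tarPath string) error {
        return exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo cat %s | docker load", tarPath)).Run()
    }

    func main() {
        // Name and hash come from the log; the path is an assumed stand-in.
        img := "registry.k8s.io/pause:3.7"
        want := "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"
        if needsTransfer(img, want) {
            if err := loadFromCache("/var/lib/minikube/images/pause_3.7"); err != nil {
                fmt.Println("load failed:", err)
                return
            }
            fmt.Println("loaded", img, "from cache")
        }
    }

The X warning above also shows the failure mode: kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy had no tarball in the host cache, so those images could not be loaded even though the preloaded k8s.gcr.io copies were removed.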
	I0729 10:38:16.434361    4671 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 10:38:16.434419    4671 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-396000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:38:16.434481    4671 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 10:38:16.453691    4671 cni.go:84] Creating CNI manager for ""
	I0729 10:38:16.453702    4671 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:38:16.453707    4671 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:38:16.453715    4671 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-396000 NodeName:stopped-upgrade-396000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:38:16.453780    4671 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-396000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 10:38:16.453841    4671 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 10:38:16.456624    4671 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:38:16.456658    4671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 10:38:16.459496    4671 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 10:38:16.464372    4671 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:38:16.469170    4671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
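[Editor's note] The kubeadm config printed above is rendered from the kubeadm options struct and written to /var/tmp/minikube/kubeadm.yaml.new (the 2096-byte scp just logged). A minimal sketch of rendering the InitConfiguration stanza with text/template; the struct and field names here are illustrative assumptions, not minikube's actual bootstrapper types:

    package main

    import (
        "os"
        "text/template"
    )

    // Illustrative options struct; minikube's real bootstrapper types differ.
    type kubeadmOpts struct {
        AdvertiseAddress string
        APIServerPort    int
        CRISocket        string
        NodeName         string
    }

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        opts := kubeadmOpts{
            AdvertiseAddress: "10.0.2.15",
            APIServerPort:    8443,
            CRISocket:        "unix:///var/run/cri-dockerd.sock",
            NodeName:         "stopped-upgrade-396000",
        }
        t := template.Must(template.New("kubeadm").Parse(initCfg))
        _ = t.Execute(os.Stdout, opts)
    }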
	I0729 10:38:16.474471    4671 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 10:38:16.475654    4671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:38:16.479521    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:38:16.558319    4671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:38:16.563461    4671 certs.go:68] Setting up /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000 for IP: 10.0.2.15
	I0729 10:38:16.563470    4671 certs.go:194] generating shared ca certs ...
	I0729 10:38:16.563479    4671 certs.go:226] acquiring lock for ca certs: {Name:mk28bd7d778d1316d2729251af42b84d93001f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:38:16.563645    4671 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.key
	I0729 10:38:16.563689    4671 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/proxy-client-ca.key
	I0729 10:38:16.563699    4671 certs.go:256] generating profile certs ...
	I0729 10:38:16.563762    4671 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/client.key
	I0729 10:38:16.563777    4671 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.key.3a53ef7d
	I0729 10:38:16.563786    4671 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.crt.3a53ef7d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 10:38:16.697532    4671 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.crt.3a53ef7d ...
	I0729 10:38:16.697547    4671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.crt.3a53ef7d: {Name:mkf8c8827c3bf4e8c67713a9eecd11bc6940bf81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:38:16.699374    4671 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.key.3a53ef7d ...
	I0729 10:38:16.699381    4671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.key.3a53ef7d: {Name:mk081eaa9df64d2852d9436fbb1765eef30ee189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:38:16.699537    4671 certs.go:381] copying /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.crt.3a53ef7d -> /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.crt
	I0729 10:38:16.699874    4671 certs.go:385] copying /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.key.3a53ef7d -> /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.key
	I0729 10:38:16.700029    4671 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/proxy-client.key
	I0729 10:38:16.700168    4671 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/1648.pem (1338 bytes)
	W0729 10:38:16.700190    4671 certs.go:480] ignoring /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/1648_empty.pem, impossibly tiny 0 bytes
	I0729 10:38:16.700196    4671 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:38:16.700214    4671 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:38:16.700232    4671 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:38:16.700250    4671 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/key.pem (1675 bytes)
	I0729 10:38:16.700287    4671 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/files/etc/ssl/certs/16482.pem (1708 bytes)
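The profile certificate generated above is signed by minikubeCA and carries the four IP SANs from the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 10.0.2.15). A self-contained crypto/x509 sketch of issuing a server certificate with those SANs (a throwaway CA stands in for the on-disk minikubeCA, and the 26280h lifetime mirrors the CertExpiration value later in this log):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA for the sketch; the real run loads minikubeCA from disk.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	caCert, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		panic(err)
    	}

    	// Server certificate with the IP SANs listed in the log above.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }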
	I0729 10:38:16.700644    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:38:16.707890    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 10:38:16.714920    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:38:16.722249    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:38:16.729094    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 10:38:16.735626    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 10:38:16.742868    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:38:16.750138    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:38:16.757103    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:38:16.763574    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/1648.pem --> /usr/share/ca-certificates/1648.pem (1338 bytes)
	I0729 10:38:16.770899    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/files/etc/ssl/certs/16482.pem --> /usr/share/ca-certificates/16482.pem (1708 bytes)
	I0729 10:38:16.777883    4671 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:38:16.783169    4671 ssh_runner.go:195] Run: openssl version
	I0729 10:38:16.785203    4671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16482.pem && ln -fs /usr/share/ca-certificates/16482.pem /etc/ssl/certs/16482.pem"
	I0729 10:38:16.788320    4671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16482.pem
	I0729 10:38:16.789884    4671 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:03 /usr/share/ca-certificates/16482.pem
	I0729 10:38:16.789908    4671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16482.pem
	I0729 10:38:16.791732    4671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16482.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 10:38:16.794987    4671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:38:16.798266    4671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:38:16.799735    4671 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:38:16.799760    4671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:38:16.801622    4671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:38:16.804520    4671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1648.pem && ln -fs /usr/share/ca-certificates/1648.pem /etc/ssl/certs/1648.pem"
	I0729 10:38:16.807670    4671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1648.pem
	I0729 10:38:16.809167    4671 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:03 /usr/share/ca-certificates/1648.pem
	I0729 10:38:16.809184    4671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1648.pem
	I0729 10:38:16.810906    4671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1648.pem /etc/ssl/certs/51391683.0"
	I0729 10:38:16.813820    4671 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:38:16.815238    4671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 10:38:16.818126    4671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 10:38:16.819949    4671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 10:38:16.822336    4671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 10:38:16.824148    4671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 10:38:16.825934    4671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
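Each `openssl x509 -checkend 86400` above exits non-zero when the certificate expires within the next 86400 seconds (24 hours). The same check in Go, as a sketch over a PEM file on disk:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in pemPath expires
    // within d, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(2)
    	}
    	if soon {
    		os.Exit(1) // non-zero, like -checkend: expires within the window
    	}
    }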
	I0729 10:38:16.827743    4671 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:38:16.827811    4671 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 10:38:16.837453    4671 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 10:38:16.840544    4671 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 10:38:16.840550    4671 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 10:38:16.840570    4671 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 10:38:16.843813    4671 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:38:16.844102    4671 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-396000" does not appear in /Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:38:16.844208    4671 kubeconfig.go:62] /Users/jenkins/minikube-integration/19345-1151/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-396000" cluster setting kubeconfig missing "stopped-upgrade-396000" context setting]
	I0729 10:38:16.844410    4671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/kubeconfig: {Name:mk69e1ff39ac907f2664a3f00c50d678e5bdc356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:38:16.844825    4671 kapi.go:59] client config for stopped-upgrade-396000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/client.key", CAFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1044f80c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 10:38:16.845156    4671 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 10:38:16.847823    4671 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-396000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
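The drift check hinges on diff's exit code: 0 means the rendered config is unchanged, 1 means it differs (reconfigure, as happens here for criSocket and the kubelet cgroup settings), and anything else is a hard error. A sketch of that three-way branch with os/exec (a local stand-in for the remote diff the log runs over SSH):

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.CombinedOutput()
    	var ee *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("config unchanged, skipping reconfigure")
    	case errors.As(err, &ee) && ee.ExitCode() == 1:
    		// diff exits 1 when the files differ: reconfigure from the .new file.
    		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
    	default:
    		fmt.Fprintln(os.Stderr, "diff failed:", err)
    		os.Exit(1)
    	}
    }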
	I0729 10:38:16.847829    4671 kubeadm.go:1160] stopping kube-system containers ...
	I0729 10:38:16.847870    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 10:38:16.858120    4671 docker.go:483] Stopping containers: [3a751030b0c1 06ee739538c0 911773d2a582 d85e93f01c88 8932880f9d0a 52cd23a2afc6 ca2e80c87719 9754f11c265c]
	I0729 10:38:16.858187    4671 ssh_runner.go:195] Run: docker stop 3a751030b0c1 06ee739538c0 911773d2a582 d85e93f01c88 8932880f9d0a 52cd23a2afc6 ca2e80c87719 9754f11c265c
	I0729 10:38:16.868520    4671 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 10:38:16.874350    4671 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:38:16.876940    4671 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:38:16.876950    4671 kubeadm.go:157] found existing configuration files:
	
	I0729 10:38:16.876970    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf
	I0729 10:38:16.879611    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:38:16.879628    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:38:16.882614    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf
	I0729 10:38:16.885214    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:38:16.885233    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:38:16.887790    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf
	I0729 10:38:16.890516    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:38:16.890538    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:38:16.893097    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf
	I0729 10:38:16.895596    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:38:16.895617    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
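The four grep/rm pairs above apply one rule per kubeconfig: keep /etc/kubernetes/*.conf only if it already points at the expected endpoint, otherwise delete it so the kubeadm phases below regenerate it. The loop, sketched with the stdlib (endpoint string taken from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:50526"
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing at a stale endpoint: remove it so that
    			// `kubeadm init phase kubeconfig all` below rewrites it.
    			os.Remove(conf)
    			fmt.Println("removed", conf)
    		}
    	}
    }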
	I0729 10:38:16.898656    4671 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:38:16.901554    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:38:16.923344    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:38:17.209238    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:38:17.343905    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:38:17.365835    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
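The restart path replays selected kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file, using the pinned v1.24.1 binaries. A sketch of that sequencing (paths and phase names as they appear in the log; error handling simplified):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const kubeadm = "/var/lib/minikube/binaries/v1.24.1/kubeadm"
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, args := range phases {
    		cmd := exec.Command(kubeadm, append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			fmt.Fprintf(os.Stderr, "%v failed: %v\n%s", args, err, out)
    			os.Exit(1)
    		}
    	}
    }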
	I0729 10:38:17.390884    4671 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:38:17.390964    4671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:38:17.893059    4671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:38:18.393018    4671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:38:18.397176    4671 api_server.go:72] duration metric: took 1.006342083s to wait for apiserver process to appear ...
	I0729 10:38:18.397186    4671 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:38:18.397194    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
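From here on, both PIDs in this interleaved log (4671 and 4497) sit in the same loop: GET https://10.0.2.15:8443/healthz with a short client timeout, log `stopped:` on failure, and retry on a roughly five-second cadence. A stripped-down sketch (the log's client verifies against the minikube ca.crt; this sketch skips verification instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: the real client trusts ca.crt rather than skipping checks.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			healthy := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if healthy {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("healthz returned", resp.Status)
    		} else {
    			fmt.Println("healthz not reachable, retrying:", err)
    		}
    		time.Sleep(5 * time.Second)
    	}
    }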
	I0729 10:38:18.804362    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:18.804523    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:18.815660    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:18.815727    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:18.826772    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:18.826831    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:18.837496    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:18.837559    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:18.854502    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:18.854574    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:18.869842    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:18.869905    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:18.881259    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:18.881325    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:18.895484    4497 logs.go:276] 0 containers: []
	W0729 10:38:18.895494    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:18.895545    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:18.906735    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:18.906751    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:18.906771    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:18.911193    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:18.911200    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:18.922959    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:18.922969    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:18.935516    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:18.935527    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:18.951356    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:18.951373    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:18.963572    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:18.963583    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:18.975383    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:18.975394    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:18.993559    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:18.993571    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:19.040399    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:19.040410    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:19.055082    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:19.055097    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:19.083965    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:19.083995    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:19.096534    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:19.096548    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:19.121924    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:19.121944    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:19.134204    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:19.134217    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:19.172503    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:19.172527    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:19.188811    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:19.188826    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:19.205237    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:19.205250    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
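Every "Gathering logs" cycle in this section is the same fan-out: list container IDs per component with a docker ps name filter, then pull the last 400 lines of each with docker logs. A sketch of that pattern (component list abbreviated):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := containerIDs(comp)
    		if err != nil {
    			fmt.Println(comp, "listing failed:", err)
    			continue
    		}
    		for _, id := range ids {
    			// mirrors: docker logs --tail 400 <id>
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("== %s [%s] ==\n%s", comp, id, logs)
    		}
    	}
    }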
	I0729 10:38:21.725209    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:23.399212    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:23.399289    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:26.727306    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:26.727644    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:26.762125    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:26.762229    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:26.780063    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:26.780143    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:26.793560    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:26.793638    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:26.805269    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:26.805338    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:26.815912    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:26.815975    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:26.826059    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:26.826127    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:26.836156    4497 logs.go:276] 0 containers: []
	W0729 10:38:26.836169    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:26.836226    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:26.846486    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:26.846503    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:26.846508    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:26.857488    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:26.857499    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:26.893507    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:26.893519    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:26.907145    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:26.907158    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:26.932381    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:26.932392    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:26.944192    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:26.944202    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:26.956368    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:26.956380    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:26.969534    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:26.969546    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:26.993183    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:26.993197    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:27.031300    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:27.031313    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:27.035568    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:27.035575    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:27.046903    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:27.046916    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:27.062371    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:27.062382    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:38:27.084122    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:27.084134    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:27.095830    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:27.095841    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:27.109504    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:27.109514    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:27.124623    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:27.124634    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:28.399764    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:28.399844    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:29.638162    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:33.400503    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:33.400521    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:34.640218    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:34.640329    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:34.651315    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:34.651384    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:34.662279    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:34.662347    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:34.672530    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:34.672600    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:34.687285    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:34.687357    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:34.697909    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:34.697975    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:34.717805    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:34.717874    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:34.728962    4497 logs.go:276] 0 containers: []
	W0729 10:38:34.728974    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:34.729033    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:34.739923    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:34.739948    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:34.739954    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:34.754785    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:34.754800    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:34.766692    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:34.766703    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:38:34.792915    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:34.792928    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:34.805459    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:34.805470    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:34.828056    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:34.828067    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:34.845310    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:34.845320    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:34.860966    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:34.860977    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:34.896625    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:34.896640    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:34.922619    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:34.922632    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:34.934669    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:34.934681    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:34.946778    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:34.946790    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:34.951261    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:34.951270    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:34.988797    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:34.988811    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:35.006505    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:35.006515    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:35.017775    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:35.017788    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:35.031179    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:35.031191    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:38.401034    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:38.401127    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:37.544879    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:43.402170    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:43.402230    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:42.545922    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:42.546195    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:42.572751    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:42.572871    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:42.590165    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:42.590246    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:42.603054    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:42.603125    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:42.614783    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:42.614856    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:42.629636    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:42.629709    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:42.640944    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:42.641022    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:42.651350    4497 logs.go:276] 0 containers: []
	W0729 10:38:42.651361    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:42.651418    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:42.662008    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:42.662026    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:42.662031    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:42.676351    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:42.676362    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:42.691020    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:42.691029    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:38:42.708663    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:42.708673    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:42.732465    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:42.732475    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:42.769599    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:42.769608    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:42.822652    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:42.822666    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:42.841136    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:42.841151    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:42.857689    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:42.857704    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:42.869284    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:42.869297    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:42.880978    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:42.880991    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:42.892532    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:42.892543    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:42.904369    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:42.904382    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:42.918407    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:42.918417    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:42.933576    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:42.933584    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:42.945138    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:42.945149    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:42.949501    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:42.949508    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:45.477351    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:48.402877    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:48.402947    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:50.479488    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:50.479811    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:50.503601    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:50.503701    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:50.519109    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:50.519192    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:50.531684    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:50.531760    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:50.542488    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:50.542559    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:50.552931    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:50.553001    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:50.563622    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:50.563694    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:50.574800    4497 logs.go:276] 0 containers: []
	W0729 10:38:50.574813    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:50.574871    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:50.585080    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:50.585096    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:50.585102    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:50.599033    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:50.599043    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:38:50.616608    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:50.616619    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:50.628866    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:50.628878    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:50.640290    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:50.640302    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:50.676320    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:50.676330    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:50.690847    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:50.690860    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:50.704081    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:50.704093    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:50.720855    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:50.720867    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:50.732322    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:50.732333    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:50.744231    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:50.744241    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:50.755396    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:50.755406    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:50.773138    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:50.773151    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:50.796845    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:50.796852    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:50.808492    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:50.808503    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:50.812702    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:50.812709    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:50.846783    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:50.846795    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:53.404717    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:53.404790    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:53.378085    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:58.406418    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:58.406471    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:58.380691    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:58.381187    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:38:58.418210    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:38:58.418336    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:38:58.439812    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:38:58.439907    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:38:58.454739    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:38:58.454819    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:38:58.469525    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:38:58.469605    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:38:58.480190    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:38:58.480260    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:38:58.490863    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:38:58.490932    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:38:58.501732    4497 logs.go:276] 0 containers: []
	W0729 10:38:58.501741    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:38:58.501795    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:38:58.513573    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:38:58.513590    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:38:58.513596    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:38:58.539414    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:38:58.539425    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:38:58.554311    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:38:58.554321    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:38:58.569198    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:38:58.569212    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:38:58.581414    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:38:58.581425    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:38:58.593782    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:38:58.593794    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:38:58.631782    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:38:58.631793    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:38:58.643735    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:38:58.643746    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:38:58.655913    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:38:58.655923    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:38:58.667655    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:38:58.667665    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:38:58.672032    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:38:58.672040    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:38:58.687401    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:38:58.687412    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:38:58.699531    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:38:58.699542    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:38:58.723342    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:38:58.723351    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:38:58.759876    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:38:58.759885    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:38:58.773671    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:38:58.773683    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:38:58.787310    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:38:58.787322    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:39:01.306807    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:03.408662    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:03.408734    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:06.309109    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:06.309283    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:06.320951    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:39:06.321024    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:06.332057    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:39:06.332127    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:06.342763    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:39:06.342834    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:06.353262    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:39:06.353328    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:06.363739    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:39:06.363804    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:06.374311    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:39:06.374384    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:06.384260    4497 logs.go:276] 0 containers: []
	W0729 10:39:06.384271    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:06.384328    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:06.394714    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:39:06.394730    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:39:06.394736    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:39:06.412003    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:06.412014    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:06.435729    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:39:06.435737    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:39:06.450139    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:39:06.450149    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:39:06.461683    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:39:06.461695    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:39:06.473999    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:39:06.474011    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:06.486272    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:39:06.486284    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:39:06.511446    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:39:06.511456    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:39:06.525732    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:39:06.525743    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:39:06.538317    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:39:06.538327    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:39:06.549566    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:39:06.549578    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:39:06.560821    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:39:06.560833    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:39:06.575845    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:39:06.575856    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:39:06.587330    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:06.587341    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:06.624100    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:06.624108    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:06.628128    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:06.628134    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:06.661856    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:39:06.661867    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:39:08.410948    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:08.410997    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:09.177066    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:13.412558    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:13.412636    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:14.179088    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:14.179197    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:14.192463    4497 logs.go:276] 2 containers: [742c989dfbd6 217ddd7b537f]
	I0729 10:39:14.192537    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:14.204152    4497 logs.go:276] 2 containers: [de54c4d9508a c647ccee48a0]
	I0729 10:39:14.204233    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:14.215258    4497 logs.go:276] 1 containers: [581cf4550f7a]
	I0729 10:39:14.215332    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:14.226795    4497 logs.go:276] 2 containers: [76259c7cab0c 27436bba39ee]
	I0729 10:39:14.226870    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:14.237095    4497 logs.go:276] 1 containers: [c95fb07deedb]
	I0729 10:39:14.237163    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:14.247699    4497 logs.go:276] 2 containers: [4e1641426a03 a8b8304d388d]
	I0729 10:39:14.247770    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:14.258251    4497 logs.go:276] 0 containers: []
	W0729 10:39:14.258264    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:14.258325    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:14.268871    4497 logs.go:276] 2 containers: [411d7d1d5d46 945f7dd41808]
	I0729 10:39:14.268893    4497 logs.go:123] Gathering logs for storage-provisioner [411d7d1d5d46] ...
	I0729 10:39:14.268900    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 411d7d1d5d46"
	I0729 10:39:14.280113    4497 logs.go:123] Gathering logs for kube-apiserver [217ddd7b537f] ...
	I0729 10:39:14.280127    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 217ddd7b537f"
	I0729 10:39:14.305596    4497 logs.go:123] Gathering logs for kube-apiserver [742c989dfbd6] ...
	I0729 10:39:14.305607    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 742c989dfbd6"
	I0729 10:39:14.323383    4497 logs.go:123] Gathering logs for etcd [c647ccee48a0] ...
	I0729 10:39:14.323396    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c647ccee48a0"
	I0729 10:39:14.339656    4497 logs.go:123] Gathering logs for coredns [581cf4550f7a] ...
	I0729 10:39:14.339667    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581cf4550f7a"
	I0729 10:39:14.353671    4497 logs.go:123] Gathering logs for kube-controller-manager [4e1641426a03] ...
	I0729 10:39:14.353683    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e1641426a03"
	I0729 10:39:14.370777    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:14.370789    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:14.405968    4497 logs.go:123] Gathering logs for kube-scheduler [27436bba39ee] ...
	I0729 10:39:14.405982    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27436bba39ee"
	I0729 10:39:14.421444    4497 logs.go:123] Gathering logs for storage-provisioner [945f7dd41808] ...
	I0729 10:39:14.421454    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945f7dd41808"
	I0729 10:39:14.434536    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:39:14.434548    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:14.446956    4497 logs.go:123] Gathering logs for kube-scheduler [76259c7cab0c] ...
	I0729 10:39:14.446967    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76259c7cab0c"
	I0729 10:39:14.458876    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:14.458887    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:14.463420    4497 logs.go:123] Gathering logs for etcd [de54c4d9508a] ...
	I0729 10:39:14.463429    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de54c4d9508a"
	I0729 10:39:14.482780    4497 logs.go:123] Gathering logs for kube-proxy [c95fb07deedb] ...
	I0729 10:39:14.482792    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c95fb07deedb"
	I0729 10:39:14.494681    4497 logs.go:123] Gathering logs for kube-controller-manager [a8b8304d388d] ...
	I0729 10:39:14.494691    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b8304d388d"
	I0729 10:39:14.508343    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:14.508356    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:14.531645    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:14.531656    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:17.069410    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:18.414873    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:18.415066    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:18.431809    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:39:18.431936    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:18.444378    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:39:18.444443    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:18.455559    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:39:18.455644    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:18.466476    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:39:18.466558    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:18.482928    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:39:18.482994    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:18.493234    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:39:18.493293    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:18.503615    4671 logs.go:276] 0 containers: []
	W0729 10:39:18.503626    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:18.503681    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:18.518289    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:39:18.518318    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:18.518327    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:18.627725    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:39:18.627738    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:39:22.071504    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:22.071596    4497 kubeadm.go:597] duration metric: took 4m4.481038667s to restartPrimaryControlPlane
	W0729 10:39:22.071671    4497 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 10:39:22.071701    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 10:39:23.041381    4497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:39:23.046361    4497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:39:23.049407    4497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:39:23.052171    4497 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:39:23.052177    4497 kubeadm.go:157] found existing configuration files:
	
	I0729 10:39:23.052199    4497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/admin.conf
	I0729 10:39:23.054753    4497 kubeadm.go:163] "https://control-plane.minikube.internal:50308" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:39:23.054773    4497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:39:23.057866    4497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/kubelet.conf
	I0729 10:39:23.061208    4497 kubeadm.go:163] "https://control-plane.minikube.internal:50308" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:39:23.061245    4497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:39:23.063995    4497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/controller-manager.conf
	I0729 10:39:23.066441    4497 kubeadm.go:163] "https://control-plane.minikube.internal:50308" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:39:23.066461    4497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:39:23.069111    4497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/scheduler.conf
	I0729 10:39:23.071877    4497 kubeadm.go:163] "https://control-plane.minikube.internal:50308" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50308 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:39:23.071897    4497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
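The four grep/rm pairs above are a stale-kubeconfig sweep: a config under /etc/kubernetes survives only if it already references the expected control-plane endpoint; otherwise it is removed so the following kubeadm init can regenerate it. A sketch of the same sequence, assuming the endpoint and file list shown in the log (the helper itself is illustrative, not minikube's actual kubeadm.go):

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleConfigs mirrors the grep/rm sequence in the log: a kubeconfig
// that does not mention the expected endpoint is assumed stale and removed.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:50308", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}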
	I0729 10:39:23.074366    4497 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 10:39:23.091387    4497 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 10:39:23.091490    4497 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:39:23.144003    4497 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:39:23.144065    4497 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:39:23.144147    4497 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 10:39:23.195164    4497 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:39:18.641975    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:39:18.641988    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:39:18.653897    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:39:18.653910    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:39:18.665613    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:39:18.665625    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:39:18.681424    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:39:18.681435    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:39:18.692992    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:39:18.693008    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:18.707043    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:39:18.707054    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:39:18.719171    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:18.719183    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:18.723419    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:39:18.723425    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:39:18.738571    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:39:18.738581    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:39:18.750333    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:18.750343    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:18.775956    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:18.775965    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:18.815190    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:39:18.815200    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:39:18.828652    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:39:18.828665    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:39:18.856404    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:39:18.856415    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:39:18.871745    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:39:18.871755    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:39:21.390924    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:23.200330    4497 out.go:204]   - Generating certificates and keys ...
	I0729 10:39:23.200365    4497 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:39:23.200456    4497 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:39:23.200490    4497 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 10:39:23.200514    4497 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 10:39:23.200565    4497 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 10:39:23.200597    4497 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 10:39:23.200638    4497 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 10:39:23.200689    4497 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 10:39:23.200754    4497 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 10:39:23.200897    4497 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 10:39:23.201089    4497 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 10:39:23.201123    4497 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:39:23.560198    4497 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:39:23.728242    4497 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:39:23.771253    4497 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:39:23.883203    4497 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:39:23.913069    4497 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:39:23.913453    4497 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:39:23.913477    4497 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:39:23.999946    4497 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:39:24.003916    4497 out.go:204]   - Booting up control plane ...
	I0729 10:39:24.003968    4497 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:39:24.004012    4497 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:39:24.004062    4497 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:39:24.004102    4497 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:39:24.004269    4497 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 10:39:26.393052    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:26.393161    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:26.404649    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:39:26.404723    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:26.416082    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:39:26.416151    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:26.427816    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:39:26.427897    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:26.439181    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:39:26.439320    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:26.450454    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:39:26.450515    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:26.468114    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:39:26.468181    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:26.479173    4671 logs.go:276] 0 containers: []
	W0729 10:39:26.479183    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:26.479243    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:26.494085    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:39:26.494100    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:26.494105    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:26.533340    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:39:26.533357    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:39:26.549020    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:39:26.549042    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:26.562103    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:39:26.562116    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:39:26.576738    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:39:26.576754    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:39:26.588898    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:39:26.588911    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:39:26.601525    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:26.601537    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:26.627920    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:26.627934    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:26.667745    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:39:26.667761    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:39:26.693918    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:39:26.693936    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:39:26.709764    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:39:26.709774    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:39:26.728994    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:39:26.729005    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:39:26.747029    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:26.747041    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:26.751836    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:39:26.751842    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:39:26.764433    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:39:26.764444    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:39:26.776822    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:39:26.776834    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:39:26.794028    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:39:26.794050    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:39:28.501590    4497 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502104 seconds
	I0729 10:39:28.501733    4497 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:39:28.505893    4497 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:39:29.026617    4497 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:39:29.027017    4497 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-466000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:39:29.529954    4497 kubeadm.go:310] [bootstrap-token] Using token: y3iu0w.0sj8j61agh78ao9n
	I0729 10:39:29.536412    4497 out.go:204]   - Configuring RBAC rules ...
	I0729 10:39:29.536474    4497 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:39:29.536526    4497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:39:29.539975    4497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:39:29.540824    4497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 10:39:29.541596    4497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:39:29.542483    4497 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:39:29.545688    4497 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:39:29.732347    4497 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:39:29.934055    4497 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:39:29.934487    4497 kubeadm.go:310] 
	I0729 10:39:29.934526    4497 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:39:29.934532    4497 kubeadm.go:310] 
	I0729 10:39:29.934566    4497 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:39:29.934570    4497 kubeadm.go:310] 
	I0729 10:39:29.934581    4497 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:39:29.934608    4497 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:39:29.934640    4497 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:39:29.934648    4497 kubeadm.go:310] 
	I0729 10:39:29.934676    4497 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:39:29.934679    4497 kubeadm.go:310] 
	I0729 10:39:29.934709    4497 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:39:29.934713    4497 kubeadm.go:310] 
	I0729 10:39:29.934745    4497 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:39:29.934782    4497 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:39:29.934835    4497 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:39:29.934840    4497 kubeadm.go:310] 
	I0729 10:39:29.934896    4497 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:39:29.934938    4497 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:39:29.934941    4497 kubeadm.go:310] 
	I0729 10:39:29.934991    4497 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y3iu0w.0sj8j61agh78ao9n \
	I0729 10:39:29.935043    4497 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e543544bbdf55d58d5e8ecb84a321dadc33a389aefb88a9b79f2e5e89d2eeaba \
	I0729 10:39:29.935060    4497 kubeadm.go:310] 	--control-plane 
	I0729 10:39:29.935063    4497 kubeadm.go:310] 
	I0729 10:39:29.935109    4497 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:39:29.935112    4497 kubeadm.go:310] 
	I0729 10:39:29.935160    4497 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y3iu0w.0sj8j61agh78ao9n \
	I0729 10:39:29.935216    4497 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e543544bbdf55d58d5e8ecb84a321dadc33a389aefb88a9b79f2e5e89d2eeaba 
	I0729 10:39:29.935725    4497 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 10:39:29.935783    4497 cni.go:84] Creating CNI manager for ""
	I0729 10:39:29.935792    4497 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:39:29.940058    4497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 10:39:29.943845    4497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 10:39:29.946628    4497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
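The scp line above drops a 496-byte bridge conflist into /etc/cni/net.d. The log does not show its contents; below is a generic bridge + portmap conflist in the documented CNI format, with illustrative field values (not minikube's exact file), written out by a small Go program for concreteness:

package main

import (
	"fmt"
	"os"
)

// conflist is a generic bridge CNI configuration in the standard conflist
// format. Field values are illustrative defaults from the CNI plugin docs.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Written locally for illustration; on the node this file would land
	// at /etc/cni/net.d/1-k8s.conflist (root-owned).
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}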
	I0729 10:39:29.951251    4497 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:39:29.951292    4497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:39:29.951322    4497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-466000 minikube.k8s.io/updated_at=2024_07_29T10_39_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=running-upgrade-466000 minikube.k8s.io/primary=true
	I0729 10:39:29.982662    4497 kubeadm.go:1113] duration metric: took 31.40275ms to wait for elevateKubeSystemPrivileges
	I0729 10:39:29.982702    4497 ops.go:34] apiserver oom_adj: -16
	I0729 10:39:30.000099    4497 kubeadm.go:394] duration metric: took 4m12.425660375s to StartCluster
	I0729 10:39:30.000118    4497 settings.go:142] acquiring lock: {Name:mk00a8a4362ef98c344b6c02e7313691374680e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:30.000205    4497 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:39:30.000590    4497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/kubeconfig: {Name:mk69e1ff39ac907f2664a3f00c50d678e5bdc356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:30.000796    4497 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:39:30.000808    4497 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 10:39:30.000840    4497 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-466000"
	I0729 10:39:30.000864    4497 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-466000"
	I0729 10:39:30.000867    4497 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-466000"
	W0729 10:39:30.000869    4497 addons.go:243] addon storage-provisioner should already be in state true
	I0729 10:39:30.000875    4497 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-466000"
	I0729 10:39:30.000878    4497 config.go:182] Loaded profile config "running-upgrade-466000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:39:30.000881    4497 host.go:66] Checking if "running-upgrade-466000" exists ...
	I0729 10:39:30.001790    4497 kapi.go:59] client config for running-upgrade-466000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/running-upgrade-466000/client.key", CAFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1027180c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 10:39:30.001909    4497 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-466000"
	W0729 10:39:30.001913    4497 addons.go:243] addon default-storageclass should already be in state true
	I0729 10:39:30.001920    4497 host.go:66] Checking if "running-upgrade-466000" exists ...
	I0729 10:39:30.004892    4497 out.go:177] * Verifying Kubernetes components...
	I0729 10:39:30.005261    4497 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:39:30.009209    4497 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:39:30.009216    4497 sshutil.go:53] new ssh client: &{IP:localhost Port:50276 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/running-upgrade-466000/id_rsa Username:docker}
	I0729 10:39:30.012788    4497 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:39:30.016833    4497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:39:30.022829    4497 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:39:30.022837    4497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:39:30.022844    4497 sshutil.go:53] new ssh client: &{IP:localhost Port:50276 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/running-upgrade-466000/id_rsa Username:docker}
	I0729 10:39:30.107398    4497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:39:30.112657    4497 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:39:30.112705    4497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:39:30.116514    4497 api_server.go:72] duration metric: took 115.713625ms to wait for apiserver process to appear ...
	I0729 10:39:30.116523    4497 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:39:30.116529    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:30.130753    4497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:39:30.190164    4497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:39:29.308607    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:35.117970    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:35.117993    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:34.310714    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:34.310822    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:34.321898    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:39:34.321981    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:34.337255    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:39:34.337327    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:34.348219    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:39:34.348287    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:34.358621    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:39:34.358702    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:34.369191    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:39:34.369263    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:34.379826    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:39:34.379896    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:34.390105    4671 logs.go:276] 0 containers: []
	W0729 10:39:34.390115    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:34.390176    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:34.400819    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:39:34.400836    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:34.400842    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:34.438963    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:39:34.438977    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:39:34.463761    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:39:34.463772    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:39:34.481445    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:34.481454    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:34.485325    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:39:34.485332    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:39:34.500750    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:39:34.500760    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:39:34.514840    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:39:34.514850    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:39:34.526615    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:39:34.526625    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:39:34.540889    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:34.540898    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:34.579358    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:39:34.579365    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:39:34.590135    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:39:34.590149    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:39:34.604992    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:39:34.605002    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:39:34.618695    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:34.618706    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:34.644381    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:39:34.644388    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:39:34.658381    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:39:34.658390    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:39:34.669649    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:39:34.669660    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:39:34.687519    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:39:34.687533    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:37.201210    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:40.118120    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:40.118149    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:42.203270    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:42.203356    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:42.214759    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:39:42.214830    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:42.227436    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:39:42.227509    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:42.238353    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:39:42.238426    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:42.250243    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:39:42.250312    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:42.261160    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:39:42.261241    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:42.273785    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:39:42.273859    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:42.284740    4671 logs.go:276] 0 containers: []
	W0729 10:39:42.284754    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:42.284817    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:42.295948    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:39:42.295966    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:42.295972    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:42.300254    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:39:42.300261    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:39:42.315858    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:39:42.315868    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:39:42.332783    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:39:42.332794    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:39:42.351675    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:39:42.351688    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:39:42.368392    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:39:42.368406    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:39:42.379493    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:42.379503    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:42.406208    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:39:42.406237    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:42.419377    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:39:42.419390    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:39:42.433305    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:39:42.433316    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:39:42.459053    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:39:42.459064    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:39:42.470973    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:39:42.470984    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:39:42.485522    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:39:42.485537    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:39:42.503562    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:39:42.503575    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:39:42.514804    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:42.514816    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:42.554378    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:42.554391    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:42.590604    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:39:42.590619    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:39:45.118130    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:45.118157    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:45.104963    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:50.118383    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:50.118463    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:50.105820    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:50.105983    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:50.121357    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:39:50.121431    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:50.143668    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:39:50.143736    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:50.156961    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:39:50.157024    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:50.168156    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:39:50.168229    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:50.178689    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:39:50.178754    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:50.189175    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:39:50.189244    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:50.198739    4671 logs.go:276] 0 containers: []
	W0729 10:39:50.198750    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:50.198803    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:50.209624    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:39:50.209643    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:50.209649    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:50.214232    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:39:50.214238    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:39:50.234124    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:39:50.234134    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:39:50.263084    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:39:50.263094    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:39:50.277006    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:39:50.277020    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:39:50.288822    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:39:50.288832    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:39:50.302409    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:39:50.302419    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:50.314136    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:39:50.314151    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:39:50.331180    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:39:50.331193    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:39:50.342235    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:50.342245    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:50.381897    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:39:50.381910    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:39:50.398976    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:39:50.398988    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:39:50.416871    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:39:50.416883    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:39:50.428584    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:50.428598    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:50.453223    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:50.453235    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:50.489321    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:39:50.489332    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:39:50.501256    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:39:50.501267    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
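With the IDs in hand, the cycle fans out over a fixed set of log sources: docker logs --tail 400 per container, journalctl for the kubelet and the docker/cri-docker units, a severity-filtered dmesg, and kubectl describe nodes against the guest's kubeconfig. A condensed sketch of that fan-out — the command strings are taken verbatim from the log, but the local bash execution stands in for minikube's SSH runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one diagnostic command through bash -c, as the
	// ssh_runner.go:195 lines above do, and returns its combined output.
	func gather(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		sources := map[string]string{
			"kubelet":        `sudo journalctl -u kubelet -n 400`,
			"Docker":         `sudo journalctl -u docker -u cri-docker -n 400`,
			"dmesg":          `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
			"describe nodes": `sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
		}
		// per-container logs, e.g. the two storage-provisioner containers above
		for _, id := range []string{"72e632c3512d", "8efdf8826d49"} {
			sources["storage-provisioner ["+id+"]"] = "docker logs --tail 400 " + id
		}
		for name, cmd := range sources {
			fmt.Println("Gathering logs for", name, "...")
			if out, err := gather(cmd); err != nil {
				fmt.Println("  failed:", err)
			} else {
				fmt.Print(out)
			}
		}
	}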
	I0729 10:39:53.014232    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:55.118656    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:55.118701    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:58.016270    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:58.016434    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:58.031439    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:39:58.031527    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:58.043128    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:39:58.043194    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:58.053377    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:39:58.053445    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:58.068139    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:39:58.068208    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:58.080668    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:39:58.080734    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:58.095717    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:39:58.095775    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:58.106266    4671 logs.go:276] 0 containers: []
	W0729 10:39:58.106279    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:58.106339    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:58.116440    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:39:58.116461    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:39:58.116467    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:39:58.141411    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:39:58.141421    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:39:58.155725    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:39:58.155736    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:58.167808    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:58.167818    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:58.205245    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:58.205256    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:58.240554    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:39:58.240568    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:39:58.252389    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:39:58.252401    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:39:58.270581    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:58.270592    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:58.295482    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:58.295489    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:58.299611    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:39:58.299619    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:39:58.313815    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:39:58.313827    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:39:58.325167    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:39:58.325178    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:39:58.337053    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:39:58.337063    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:39:58.350840    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:39:58.350854    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:39:58.367086    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:39:58.367096    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:39:58.384382    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:39:58.384403    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:39:58.396522    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:39:58.396532    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:00.119399    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:00.119427    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 10:40:00.481117    4497 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 10:40:00.484757    4497 out.go:177] * Enabled addons: storage-provisioner
	I0729 10:40:00.496696    4497 addons.go:510] duration metric: took 30.4973465s for enable addons: enabled=[storage-provisioner]
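The default-storageclass addon fails here because its callback must list StorageClasses through the same unreachable apiserver, while storage-provisioner only needs a manifest applied, so it is still reported as enabled. A hedged client-go sketch of the call that times out — the kubeconfig path and timeout value are assumptions for illustration:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		config.Timeout = 10 * time.Second // client-side timeout; an unreachable apiserver surfaces as "i/o timeout"

		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			// With the apiserver down, this is where the addon callback fails:
			// Get ".../apis/storage.k8s.io/v1/storageclasses": dial tcp ...: i/o timeout
			fmt.Println("Error listing StorageClasses:", err)
			return
		}
		for _, sc := range scs.Items {
			fmt.Println(sc.Name)
		}
	}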
	I0729 10:40:00.921588    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:05.120029    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:05.120071    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:05.923351    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:05.923542    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:05.943635    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:05.943736    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:05.958287    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:05.958366    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:05.970626    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:05.970695    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:05.981244    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:05.981312    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:05.993073    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:05.993147    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:06.006276    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:06.006363    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:06.017674    4671 logs.go:276] 0 containers: []
	W0729 10:40:06.017689    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:06.017755    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:06.028711    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:40:06.028729    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:06.028734    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:06.068621    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:06.068631    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:06.083143    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:06.083160    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:06.095539    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:06.095554    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:06.107608    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:06.107620    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:06.122207    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:06.122217    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:06.133624    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:06.133635    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:06.152392    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:06.152404    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:06.166683    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:06.166693    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:40:06.177883    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:06.177893    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:06.203525    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:06.203534    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:06.207671    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:06.207676    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:06.241874    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:06.241886    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:06.267260    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:06.267271    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:06.281816    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:06.281825    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:06.293913    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:06.293926    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:06.308994    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:06.309005    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:10.120957    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:10.121009    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:08.823021    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:15.122234    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:15.122296    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:13.825156    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:13.825463    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:13.849420    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:13.849540    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:13.865428    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:13.865505    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:13.878086    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:13.878163    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:13.889426    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:13.889497    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:13.902355    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:13.902421    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:13.913586    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:13.913662    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:13.923854    4671 logs.go:276] 0 containers: []
	W0729 10:40:13.923865    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:13.923920    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:13.934053    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:40:13.934071    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:13.934076    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:13.948011    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:13.948022    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:13.961946    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:13.961956    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:13.976382    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:13.976393    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:13.990759    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:13.990770    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:14.015529    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:14.015538    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:14.052410    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:14.052423    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:14.064005    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:14.064017    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:14.080908    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:14.080919    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:14.098419    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:14.098429    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:40:14.109770    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:14.109781    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:14.121293    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:14.121304    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:14.159176    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:14.159187    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:14.163405    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:14.163414    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:14.188198    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:14.188211    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:14.199767    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:14.199778    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:14.211977    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:14.211990    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
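The "container status" one-liner above is worth unpacking: the backticks substitute crictl's path when it is on PATH and otherwise the literal word crictl, whose failure then triggers the || sudo docker ps -a fallback, so the same command works on both CRI-configured and Docker-only guests. The same fallback expressed directly in Go rather than shell:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus prefers crictl and falls back to docker,
	// mirroring the shell one-liner in the log above.
	func containerStatus() (string, error) {
		if path, err := exec.LookPath("crictl"); err == nil {
			if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
				return string(out), nil
			}
		}
		out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("both crictl and docker failed:", err)
			return
		}
		fmt.Print(out)
	}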
	I0729 10:40:16.725853    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:20.123985    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:20.124014    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:21.727642    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:21.727851    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:21.743894    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:21.743973    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:21.756779    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:21.756858    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:21.767935    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:21.768006    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:21.778240    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:21.778311    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:21.788704    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:21.788770    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:21.798986    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:21.799048    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:21.809310    4671 logs.go:276] 0 containers: []
	W0729 10:40:21.809323    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:21.809385    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:21.820604    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:40:21.820623    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:21.820630    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:21.825353    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:21.825360    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:21.864261    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:21.864271    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:21.878462    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:21.878476    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:21.912378    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:21.912391    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:21.926774    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:21.926785    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:21.938151    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:21.938164    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:21.952066    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:21.952079    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:21.963873    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:21.963884    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:21.981366    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:21.981376    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:40:21.992522    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:21.992534    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:22.016339    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:22.016347    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:22.027789    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:22.027799    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:22.064879    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:22.064886    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:22.093055    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:22.093067    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:22.107643    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:22.107656    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:22.123115    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:22.123125    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:25.126077    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:25.126142    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:24.636519    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:30.128267    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:30.128355    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:30.146765    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:40:30.146835    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:30.168665    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:40:30.168749    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:30.179386    4497 logs.go:276] 2 containers: [d43e4d4e905e b67afba30dbd]
	I0729 10:40:30.179454    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:30.189942    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:40:30.190010    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:30.200575    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:40:30.200638    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:30.211345    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:40:30.211407    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:30.221428    4497 logs.go:276] 0 containers: []
	W0729 10:40:30.221438    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:30.221491    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:30.231494    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:40:30.231508    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:30.231513    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:40:30.264173    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:40:30.264269    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:40:30.265620    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:30.265628    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:30.270509    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:40:30.270516    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:40:30.284485    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:40:30.284496    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:40:30.295711    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:40:30.295724    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:40:30.308702    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:40:30.308716    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:40:30.319869    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:30.319883    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:30.343054    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:30.343063    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:30.380148    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:40:30.380162    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:40:30.393964    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:40:30.393975    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:40:30.405617    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:40:30.405630    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:40:30.420694    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:40:30.420711    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:40:30.438921    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:40:30.438932    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:30.450932    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:40:30.450946    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:40:30.450972    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:40:30.450978    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:40:30.450983    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:40:30.450989    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:40:30.450991    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
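Process 4497 does more than dump the kubelet journal: logs.go:138 scans it for known-bad patterns and replays the matches under "X Problems detected in kubelet". The flagged lines here are RBAC noise from the upgraded node coming back with a stale identity, so the node-to-ConfigMap relationship has not been re-established yet. A minimal scanner in the same spirit — the two patterns below are illustrative stand-ins, not minikube's actual list:

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"regexp"
	)

	// problemPatterns match the reflector failures flagged in the log;
	// they are examples, not minikube's real pattern set.
	var problemPatterns = []*regexp.Regexp{
		regexp.MustCompile(`failed to list \*v1\.`),
		regexp.MustCompile(`Failed to watch \*v1\.`),
	}

	func main() {
		out, err := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400").Output()
		if err != nil {
			fmt.Println("journalctl failed:", err)
			return
		}
		var problems []string
		scanner := bufio.NewScanner(bytes.NewReader(out))
		for scanner.Scan() {
			line := scanner.Text()
			for _, p := range problemPatterns {
				if p.MatchString(line) {
					fmt.Println("Found kubelet problem:", line)
					problems = append(problems, line)
					break
				}
			}
		}
		if len(problems) > 0 {
			fmt.Println("X Problems detected in kubelet:")
			for _, p := range problems {
				fmt.Println("  " + p)
			}
		}
	}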
	I0729 10:40:29.638883    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:29.639131    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:29.657542    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:29.657635    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:29.672963    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:29.673043    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:29.684319    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:29.684395    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:29.700196    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:29.700283    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:29.710643    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:29.710711    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:29.721348    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:29.721407    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:29.732149    4671 logs.go:276] 0 containers: []
	W0729 10:40:29.732163    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:29.732225    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:29.743182    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:40:29.743203    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:29.743210    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:29.755601    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:29.755618    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:29.775050    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:29.775063    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:29.794533    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:29.794544    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:29.806157    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:29.806170    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:29.810493    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:29.810503    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:29.826613    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:29.826623    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:29.838584    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:29.838597    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:29.850237    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:29.850248    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:29.873268    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:29.873276    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:29.891176    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:29.891190    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:29.916280    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:29.916298    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:29.930773    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:29.930784    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:29.945715    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:29.945726    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:40:29.958246    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:29.958256    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:29.995693    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:29.995701    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:30.030155    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:30.030167    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:32.543339    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:37.545764    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:37.546110    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:37.579958    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:37.580094    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:37.600966    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:37.601059    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:37.615948    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:37.616016    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:37.628679    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:37.628750    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:37.643248    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:37.643331    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:37.656508    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:37.656565    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:37.667119    4671 logs.go:276] 0 containers: []
	W0729 10:40:37.667134    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:37.667196    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:37.678031    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:40:37.678047    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:37.678052    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:37.716627    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:37.716640    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:37.731626    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:37.731636    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:37.746527    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:37.746538    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:37.769449    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:37.769464    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:37.773974    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:37.773980    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:37.808456    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:37.808474    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:37.834563    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:37.834580    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:37.845947    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:37.845960    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:37.857779    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:37.857790    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:37.869846    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:37.869860    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:37.887726    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:37.887738    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:37.903566    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:37.903582    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:40:37.915966    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:37.915977    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:37.928599    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:37.928612    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:37.943144    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:37.943158    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:37.956155    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:37.956166    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:40.454338    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:40.480867    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:45.455641    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:45.455847    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:45.474099    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:40:45.474191    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:45.488145    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:40:45.488221    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:45.500722    4497 logs.go:276] 2 containers: [d43e4d4e905e b67afba30dbd]
	I0729 10:40:45.500790    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:45.512973    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:40:45.513045    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:45.524607    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:40:45.524676    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:45.536771    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:40:45.536838    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:45.548269    4497 logs.go:276] 0 containers: []
	W0729 10:40:45.548281    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:45.548339    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:45.559615    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:40:45.559633    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:45.559639    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:45.597190    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:40:45.597200    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:40:45.609977    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:40:45.609989    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:40:45.622464    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:40:45.622477    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:40:45.638872    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:40:45.638888    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:40:45.658188    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:45.658199    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:45.683750    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:45.683769    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:40:45.717630    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:40:45.717729    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:40:45.719170    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:45.719175    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:45.723937    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:40:45.723949    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:40:45.740460    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:40:45.740473    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:40:45.755292    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:40:45.755301    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:40:45.773978    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:40:45.773987    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:40:45.786645    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:40:45.786653    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:45.798713    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:40:45.798726    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:40:45.798752    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:40:45.798756    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:40:45.798760    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:40:45.798764    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:40:45.798766    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:40:45.482844    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:45.482936    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:45.496002    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:45.496080    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:45.507349    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:45.507415    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:45.518690    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:45.518757    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:45.531292    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:45.531358    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:45.542419    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:45.542490    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:45.553902    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:45.553976    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:45.565416    4671 logs.go:276] 0 containers: []
	W0729 10:40:45.565427    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:45.565481    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:45.576387    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:40:45.576405    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:45.576411    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:45.618274    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:45.618295    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:45.658709    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:45.658718    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:45.672280    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:45.672292    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:45.698762    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:45.698776    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:45.714915    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:45.714927    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:45.730337    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:45.730354    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:45.749771    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:45.749787    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:45.754629    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:45.754639    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:45.770988    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:45.770999    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:45.785569    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:45.785590    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:45.799379    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:45.799388    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:45.811243    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:45.811255    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:45.834989    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:45.834996    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:45.848576    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:45.848588    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:45.860498    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:45.860511    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:45.874102    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:45.874111    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
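
The sweep above is the test driver's standard collection pass: it resolves each control-plane component to its container IDs, tails the last 400 lines of each container, and pulls the kubelet and Docker units from journald, the kernel ring buffer via dmesg, and a "kubectl describe nodes" snapshot taken with the guest-local kubectl at /var/lib/minikube/binaries/v1.24.1/kubectl. The same pass can be reproduced by hand; a minimal sketch, where <profile> and <container-id> are placeholders (this part of the log does not name the profile that PID 4671 is driving):

    # Open a shell in the guest VM for the affected profile
    minikube ssh -p <profile>
    # Inside the guest, mirror the commands the driver runs:
    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'
    docker logs --tail 400 <container-id>    # repeat for each ID reported
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

The "container status" line uses a fallback chain, preferring crictl when it is installed and dropping back to plain docker ps -a otherwise.
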
	I0729 10:40:48.387411    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:53.389555    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
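
Each healthz probe in this log runs with roughly a five-second client timeout: the check opened at 10:40:48 is declared stopped at 10:40:53, after which the driver re-collects logs and tries again. Note that PIDs 4497 and 4671 are two separate test processes driving separate VMs; both report https://10.0.2.15:8443 because QEMU's user-mode networking gives every guest the same 10.0.2.15 address. The probe can be issued by hand from inside the guest; a minimal curl sketch with a comparable per-attempt budget (-k skips TLS verification, since the apiserver serves a self-signed certificate):

    # Poll the apiserver health endpoint, 5 seconds per attempt
    until curl -ks --max-time 5 https://10.0.2.15:8443/healthz; do
        echo "apiserver not healthy yet; retrying" >&2
        sleep 5
    done
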
	I0729 10:40:53.389839    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:53.408332    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:53.408429    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:53.422482    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:53.422562    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:53.434261    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:53.434347    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:53.445496    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:53.445564    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:53.456310    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:53.456380    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:53.467139    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:53.467209    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:53.477687    4671 logs.go:276] 0 containers: []
	W0729 10:40:53.477698    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:53.477754    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:53.488327    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
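
With the Docker runtime, Kubernetes-managed containers are named k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, which is why filtering on name=k8s_etcd or name=k8s_kube-apiserver resolves a component to its container IDs. Two IDs per component (for example [151103ab65a7 911773d2a582] for kube-apiserver) is consistent with a restarted control plane, since docker ps -a also lists exited containers. A sketch of a single command, run inside the guest, that shows all of them with their states:

    # List every Kubernetes-managed container with its name and current state
    docker ps -a --filter name=k8s_ --format 'table {{.ID}}\t{{.Names}}\t{{.Status}}'
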
	I0729 10:40:53.488348    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:53.488353    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:53.505653    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:53.505663    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:53.528165    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:53.528172    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:53.539731    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:53.539742    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:53.554719    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:53.554731    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:53.566669    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:53.566679    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:53.580751    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:53.580761    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:53.592167    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:53.592177    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:53.596381    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:53.596391    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:53.610491    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:53.610500    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:53.635320    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:53.635331    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:53.649473    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:53.649489    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:53.665582    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:53.665592    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:40:53.677844    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:53.677854    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:53.689584    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:53.689595    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:53.727236    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:53.727251    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:53.762078    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:53.762090    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:55.868658    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:56.344401    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:00.870738    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:00.871161    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:00.913666    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:41:00.913794    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:00.941785    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:41:00.941865    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:00.954782    4497 logs.go:276] 2 containers: [d43e4d4e905e b67afba30dbd]
	I0729 10:41:00.954861    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:00.966406    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:41:00.966473    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:00.977443    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:41:00.977518    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:00.992316    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:41:00.992377    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:01.003143    4497 logs.go:276] 0 containers: []
	W0729 10:41:01.003155    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:01.003215    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:01.014026    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:41:01.014040    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:01.014046    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:41:01.047529    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:01.047620    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:01.048981    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:01.048987    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:01.053237    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:01.053246    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:01.087605    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:41:01.087615    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:41:01.099555    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:41:01.099568    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:41:01.110994    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:41:01.111005    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:41:01.128645    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:41:01.128657    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:41:01.140817    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:01.140828    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:01.163857    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:41:01.163866    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:01.175707    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:41:01.175719    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:41:01.190328    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:41:01.190338    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:41:01.204299    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:41:01.204310    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:41:01.216101    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:41:01.216110    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:41:01.230749    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:01.230760    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:41:01.230787    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:41:01.230791    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:01.230794    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:01.230810    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:01.230814    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
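
The kubelet problem flagged above is a Node authorizer denial: the kubelet authenticates as system:node:running-upgrade-466000, and the node authorizer only lets a kubelet read a ConfigMap that is referenced by a pod already bound to its node. "No relationship found" means the apiserver's node-to-object graph had no coredns pod bound to this node at that moment, which is typically transient while an upgraded control plane is still settling. How authorization answers for that identity can be checked via impersonation; an illustrative sketch (requires impersonation rights, such as the cluster-admin credentials minikube sets up):

    # Ask whether the kubelet's node identity may read the coredns ConfigMap
    kubectl auth can-i get configmaps/coredns -n kube-system \
        --as system:node:running-upgrade-466000 --as-group system:nodes
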
	I0729 10:41:01.346616    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:01.346764    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:01.364783    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:01.364855    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:01.376987    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:01.377046    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:01.387014    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:01.387083    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:01.397336    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:01.397410    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:01.407351    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:01.407419    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:01.421113    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:01.421186    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:01.431986    4671 logs.go:276] 0 containers: []
	W0729 10:41:01.431997    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:01.432049    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:01.442431    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:01.442449    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:01.442455    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:01.454577    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:01.454587    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:01.466279    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:01.466291    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:01.504734    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:01.504745    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:01.544246    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:01.544258    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:01.569835    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:01.569845    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:01.586207    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:01.586216    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:01.599308    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:01.599320    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:01.607025    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:01.607035    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:01.618529    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:01.618542    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:01.633829    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:01.633840    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:01.652447    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:01.652458    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:01.666571    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:01.666581    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:01.681308    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:01.681318    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:01.692624    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:01.692634    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:01.704384    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:01.704399    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:01.719207    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:01.719218    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:04.245470    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:09.247762    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:09.248017    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:09.269536    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:09.269664    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:09.285959    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:09.286036    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:09.298510    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:09.298580    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:09.309545    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:09.309620    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:09.319636    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:09.319704    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:09.330103    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:09.330175    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:09.339939    4671 logs.go:276] 0 containers: []
	W0729 10:41:09.339950    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:09.340007    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:09.350701    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:09.350717    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:09.350723    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:09.364478    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:09.364491    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:09.378865    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:09.378882    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:09.390362    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:09.390374    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:09.402559    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:09.402569    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:09.439942    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:09.439952    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:09.475189    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:09.475201    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:09.489922    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:09.489934    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:09.501758    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:09.501769    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:09.513068    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:09.513081    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:09.517186    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:09.517195    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:09.534687    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:09.534698    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:09.546563    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:09.546573    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:09.564740    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:09.564751    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:09.584521    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:09.584533    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:09.611580    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:09.611595    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:09.629926    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:09.629939    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:11.234714    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:12.156239    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:16.236442    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:16.236811    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:16.268862    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:41:16.268992    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:16.288055    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:41:16.288145    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:16.302121    4497 logs.go:276] 2 containers: [d43e4d4e905e b67afba30dbd]
	I0729 10:41:16.302187    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:16.313493    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:41:16.313569    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:16.324182    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:41:16.324255    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:16.334720    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:41:16.334786    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:16.352865    4497 logs.go:276] 0 containers: []
	W0729 10:41:16.352876    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:16.352941    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:16.363846    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:41:16.363862    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:16.363869    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:16.398681    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:41:16.398695    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:41:16.416337    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:16.416350    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:16.441280    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:41:16.441289    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:41:16.455505    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:41:16.455516    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:41:16.467635    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:41:16.467650    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:41:16.479491    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:41:16.479503    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:41:16.495648    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:41:16.495658    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:41:16.514067    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:16.514081    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:41:16.546347    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:16.546439    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:16.547777    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:16.547782    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:16.552292    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:41:16.552297    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:41:16.566357    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:41:16.566369    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:41:16.578290    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:41:16.578300    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:16.590342    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:16.590353    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:41:16.590378    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:41:16.590383    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:16.590400    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:16.590404    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:16.590407    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:17.158419    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:17.158571    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:17.169235    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:17.169309    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:17.179922    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:17.179985    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:17.190683    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:17.190753    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:17.201185    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:17.201258    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:17.211684    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:17.211758    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:17.222184    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:17.222251    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:17.232507    4671 logs.go:276] 0 containers: []
	W0729 10:41:17.232520    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:17.232575    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:17.242861    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:17.242889    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:17.242896    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:17.277312    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:17.277325    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:17.292399    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:17.292409    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:17.316396    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:17.316403    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:17.327832    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:17.327844    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:17.342669    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:17.342680    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:17.354494    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:17.354505    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:17.393200    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:17.393206    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:17.397239    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:17.397247    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:17.415251    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:17.415262    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:17.427089    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:17.427102    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:17.438266    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:17.438276    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:17.452566    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:17.452577    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:17.477565    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:17.477576    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:17.489009    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:17.489022    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:17.506420    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:17.506433    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:17.520464    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:17.520475    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:20.034066    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:25.036215    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:25.036482    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:25.056754    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:25.056845    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:25.071358    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:25.071442    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:25.083416    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:25.083489    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:25.094428    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:25.094501    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:25.104800    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:25.104862    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:25.115865    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:25.115930    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:25.128583    4671 logs.go:276] 0 containers: []
	W0729 10:41:25.128598    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:25.128664    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:25.140530    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:25.140550    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:25.140555    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:25.151846    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:25.151858    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:25.174542    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:25.174549    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:25.192117    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:25.192130    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:25.203754    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:25.203765    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:25.217843    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:25.217853    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:25.229304    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:25.229316    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:25.243704    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:25.243714    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:25.278138    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:25.278151    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:25.300347    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:25.300358    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:25.311573    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:25.311584    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:25.324385    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:25.324396    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:25.336262    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:25.336271    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:25.356592    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:25.356602    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:25.395751    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:25.395759    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:25.399850    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:25.399857    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:25.424980    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:25.424990    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:26.594370    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:27.952688    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:31.596453    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:31.596653    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:31.615911    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:41:31.616000    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:31.630476    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:41:31.630550    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:31.643008    4497 logs.go:276] 2 containers: [d43e4d4e905e b67afba30dbd]
	I0729 10:41:31.643074    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:31.657579    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:41:31.657650    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:31.668928    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:41:31.668991    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:31.680736    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:41:31.680804    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:31.693355    4497 logs.go:276] 0 containers: []
	W0729 10:41:31.693364    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:31.693417    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:31.703673    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:41:31.703690    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:41:31.703696    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:41:31.717621    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:41:31.717632    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:41:31.734842    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:41:31.734854    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:41:31.748349    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:41:31.748361    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:41:31.772027    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:41:31.772038    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:41:31.783589    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:31.783601    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:41:31.815454    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:31.815547    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:31.816891    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:31.816903    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:31.821046    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:41:31.821054    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:41:31.839143    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:41:31.839154    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:31.850270    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:31.850285    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:31.875642    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:31.875654    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:31.910412    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:41:31.910426    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:41:31.922184    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:41:31.922196    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:41:31.936714    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:31.936727    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:41:31.936752    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:41:31.936757    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:31.936768    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:31.936773    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:31.936776    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:32.954813    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:32.955087    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:32.981331    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:32.981434    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:32.999521    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:32.999602    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:33.012978    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:33.013039    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:33.027672    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:33.027741    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:33.038151    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:33.038211    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:33.048355    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:33.048423    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:33.058333    4671 logs.go:276] 0 containers: []
	W0729 10:41:33.058344    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:33.058400    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:33.068640    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:33.068657    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:33.068663    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:33.072902    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:33.072908    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:33.086972    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:33.086981    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:33.100993    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:33.101004    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:33.112241    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:33.112255    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:33.127338    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:33.127347    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:33.140477    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:33.140488    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:33.164892    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:33.164906    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:33.177227    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:33.177238    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:33.191810    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:33.191821    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:33.211626    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:33.211635    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:33.223733    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:33.223744    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:33.246193    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:33.246202    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:33.284183    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:33.284191    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:33.318916    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:33.318927    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:33.336807    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:33.336818    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:33.350691    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:33.350703    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:35.863854    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:40.866021    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:40.866402    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:40.900638    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:40.900766    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:40.919337    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:40.919432    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:40.936012    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:40.936084    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:40.947844    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:40.947919    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:40.958523    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:40.958593    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:40.980106    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:40.980179    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:40.989938    4671 logs.go:276] 0 containers: []
	W0729 10:41:40.989952    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:40.990012    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:41.000693    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:41.000710    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:41.000716    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:41.004773    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:41.004783    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:41.029471    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:41.029484    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:41.052796    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:41.052804    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:41.063852    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:41.063863    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:41.101037    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:41.101044    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:41.114690    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:41.114706    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:41.129317    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:41.129327    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:41.142168    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:41.142183    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:41.156409    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:41.156419    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:41.176062    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:41.176072    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:41.193631    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:41.193644    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:41.205921    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:41.205934    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:41.217584    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:41.217599    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:41.257559    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:41.257570    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:41.280188    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:41.280203    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:41.300622    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:41.300635    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:41.940634    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:46.942851    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:46.943290    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:46.983594    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:41:46.983735    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:47.005221    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:41:47.005340    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:47.027301    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:41:47.027383    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:47.039442    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:41:47.039507    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:47.050576    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:41:47.050645    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:47.061838    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:41:47.061906    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:47.071867    4497 logs.go:276] 0 containers: []
	W0729 10:41:47.071878    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:47.071938    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:47.082579    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:41:47.082595    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:41:47.082599    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:47.095394    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:47.095405    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:41:47.128517    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:47.128608    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
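
The two "Found kubelet problem" lines flag the same underlying event: the kubelet, authenticating as system:node:running-upgrade-466000, was denied a list/watch on the kube-system/coredns ConfigMap because the node authorizer found "no relationship found between node ... and this object" — a state that typically appears transiently while pods are being re-bound to the node during an upgrade. The detector itself reads like a substring scan over the journalctl output; a rough Go sketch of that idea (the marker strings and names are my guesses, not the actual logs.go logic):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // markers are failure phrases worth surfacing from the kubelet journal.
    var markers = []string{"failed to list", "Failed to watch", "forbidden"}

    // findProblems returns every journal line containing a known marker.
    func findProblems(journal string) []string {
        var problems []string
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            line := sc.Text()
            for _, m := range markers {
                if strings.Contains(line, m) {
                    problems = append(problems, line)
                    break
                }
            }
        }
        return problems
    }

    func main() {
        sample := `Jul 29 17:39:43 node kubelet[12601]: E0729 ... configmaps "coredns" is forbidden: ...`
        for _, p := range findProblems(sample) {
            fmt.Println("Found kubelet problem:", p)
        }
    }
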
	I0729 10:41:47.129965    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:47.129971    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:47.134737    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:47.134744    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:47.169236    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:41:47.169248    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:41:47.181723    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:41:47.181734    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:41:47.200833    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:41:47.200842    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:41:47.215387    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:41:47.215397    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:41:47.230564    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:41:47.230575    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:41:47.242679    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:41:47.242693    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:41:47.256962    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:41:47.256971    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:41:47.269286    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:41:47.269298    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:41:47.281701    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:41:47.281713    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:41:47.295736    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:41:47.295747    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:41:43.820295    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:47.313367    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:47.313379    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:47.337410    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:47.337418    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:41:47.337441    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:41:47.337446    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:41:47.337449    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:41:47.337453    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:47.337456    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:48.822455    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:48.822867    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:48.854526    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:48.854664    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:48.874310    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:48.874413    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:48.889980    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:48.890047    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:48.901994    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:48.902066    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:48.912749    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:48.912813    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:48.923740    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:48.923808    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:48.934431    4671 logs.go:276] 0 containers: []
	W0729 10:41:48.934445    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:48.934502    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:48.945311    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:48.945330    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:48.945335    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:48.950155    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:48.950164    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:48.966298    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:48.966309    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:48.989644    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:48.989659    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:49.001023    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:49.001035    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:49.015259    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:49.015271    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:49.027018    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:49.027030    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:49.038832    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:49.038847    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:49.056730    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:49.056741    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:49.070512    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:49.070526    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:49.110514    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:49.110523    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:49.135296    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:49.135307    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:49.149498    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:49.149511    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:49.162835    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:49.162846    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:49.197077    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:49.197091    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:49.211447    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:49.211458    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:49.226778    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:49.226789    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:51.741094    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:56.743453    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:56.743832    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:56.780914    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:56.781054    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:56.802280    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:56.802378    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:56.820117    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:56.820186    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:56.832215    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:56.832290    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:56.842921    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:56.842981    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:56.853226    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:56.853311    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:56.865596    4671 logs.go:276] 0 containers: []
	W0729 10:41:56.865607    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:56.865670    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:56.881232    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:56.881251    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:56.881256    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:56.896638    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:56.896648    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:56.931262    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:56.931274    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:56.946340    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:56.946350    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:56.958104    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:56.958114    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:56.981094    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:56.981102    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:56.994824    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:56.994834    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:57.006847    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:57.006858    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:57.032075    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:57.032086    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:57.043818    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:57.043829    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:57.060591    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:57.060601    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:57.097701    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:57.097710    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:57.101688    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:57.101694    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:57.115812    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:57.115821    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:57.126782    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:57.126792    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:57.140071    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:57.140080    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:57.154552    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:57.154563    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:57.341248    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:59.666448    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:02.343392    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:02.343802    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:42:02.378751    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:42:02.378879    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:42:02.405112    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:42:02.405189    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:42:02.420216    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:42:02.420293    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:42:02.431912    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:42:02.431978    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:42:02.442352    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:42:02.442423    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:42:02.453186    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:42:02.453258    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:42:02.463854    4497 logs.go:276] 0 containers: []
	W0729 10:42:02.463864    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:42:02.463922    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:42:02.474795    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:42:02.474813    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:42:02.474819    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:42:02.494525    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:42:02.494534    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:42:02.512490    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:42:02.512501    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:42:02.537735    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:42:02.537743    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:42:02.549930    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:42:02.549941    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:42:02.554630    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:42:02.554639    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:42:02.569385    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:42:02.569396    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:42:02.581246    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:42:02.581260    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:42:02.615446    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:02.615544    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:02.616934    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:42:02.616940    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:42:02.629108    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:42:02.629121    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:42:02.640459    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:42:02.640469    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:42:02.654560    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:42:02.654574    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:42:02.666867    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:42:02.666881    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:42:02.679221    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:42:02.679236    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:42:02.715330    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:42:02.715345    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:42:02.726535    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:02.726546    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:42:02.726573    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:42:02.726580    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:02.726583    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:02.726621    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:02.726638    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:04.668786    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:04.669121    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:42:04.704626    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:42:04.704736    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:42:04.724022    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:42:04.724103    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:42:04.736794    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:42:04.736858    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:42:04.747460    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:42:04.747532    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:42:04.763577    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:42:04.763645    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:42:04.774507    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:42:04.774569    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:42:04.784468    4671 logs.go:276] 0 containers: []
	W0729 10:42:04.784482    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:42:04.784536    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:42:04.794795    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:42:04.794812    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:42:04.794817    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:42:04.812127    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:42:04.812138    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:42:04.837937    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:42:04.837947    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:42:04.877159    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:42:04.877170    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:42:04.892134    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:42:04.892148    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:42:04.903995    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:42:04.904006    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:42:04.923104    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:42:04.923116    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:42:04.934398    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:42:04.934409    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:42:04.948958    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:42:04.948969    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:42:04.963621    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:42:04.963631    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:42:04.979348    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:42:04.979358    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:42:04.990669    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:42:04.990680    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:42:04.994941    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:42:04.994952    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:42:05.029931    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:42:05.029942    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:42:05.054967    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:42:05.054977    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:42:05.070563    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:42:05.070576    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:42:05.081839    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:42:05.081850    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:42:07.594679    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:12.596789    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:12.597022    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:42:12.616533    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:42:12.616649    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:42:12.631071    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:42:12.631141    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:42:12.642581    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:42:12.642648    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:42:12.653110    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:42:12.653196    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:42:12.663962    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:42:12.664033    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:42:12.674413    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:42:12.674486    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:42:12.685107    4671 logs.go:276] 0 containers: []
	W0729 10:42:12.685121    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:42:12.685178    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:42:12.697294    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:42:12.697312    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:42:12.697318    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:42:12.709489    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:42:12.709500    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:42:12.734360    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:42:12.734369    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:42:12.745835    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:42:12.745847    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:42:12.758245    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:42:12.758260    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:42:12.773235    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:42:12.773250    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:42:12.784503    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:42:12.784513    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:42:12.795648    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:42:12.795662    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:42:12.833061    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:42:12.833075    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:42:12.867357    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:42:12.867369    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:42:12.882417    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:42:12.882428    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:42:12.899810    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:42:12.899826    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:42:12.914484    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:42:12.914494    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:42:12.935827    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:42:12.935835    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:42:12.939919    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:42:12.939928    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:42:12.959766    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:42:12.959779    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:42:12.974286    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:42:12.974297    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:42:12.728641    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:15.487558    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:20.489778    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:20.489873    4671 kubeadm.go:597] duration metric: took 4m3.593169292s to restartPrimaryControlPlane
	W0729 10:42:20.489928    4671 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 10:42:20.489952    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 10:42:21.534981    4671 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.045045666s)
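
This is the turning point of the run: after 4m3.59s of healthz probes that never succeeded, restartPrimaryControlPlane gives up, and the cluster is torn down with kubeadm reset (completed in about 1.05s) so it can be re-initialized from scratch. The outer loop implied by the timestamps might look like this sketch (names and intervals are invented; the real overall deadline here was roughly four minutes):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForAPIServer repeats a per-attempt probe until an overall deadline
    // passes, then signals the caller to fall back to a full cluster reset.
    func waitForAPIServer(check func() error, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if err := check(); err == nil {
                return nil
            }
            time.Sleep(2 * time.Second) // the log shows a short pause between attempts
        }
        return errors.New("apiserver never became healthy; resetting cluster")
    }

    func main() {
        probe := func() error { return errors.New("context deadline exceeded") }
        fmt.Println(waitForAPIServer(probe, 10*time.Second)) // ~4m in the real run
    }
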
	I0729 10:42:21.535060    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:42:21.539851    4671 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:42:21.542514    4671 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:42:21.545368    4671 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:42:21.545374    4671 kubeadm.go:157] found existing configuration files:
	
	I0729 10:42:21.545405    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf
	I0729 10:42:21.548167    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:42:21.548194    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:42:21.550717    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf
	I0729 10:42:21.553392    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:42:21.553414    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:42:21.556622    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf
	I0729 10:42:21.559846    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:42:21.559869    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:42:21.562384    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf
	I0729 10:42:21.565128    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:42:21.565150    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
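
The block above is a stale-config sweep: before re-running kubeadm init, each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint (https://control-plane.minikube.internal:50526) and removed when the endpoint is absent — here trivially so, since kubeadm reset already deleted all four files (grep exits with status 2 on a missing file). A hedged Go reconstruction of that check (the real run executes grep and rm -f over SSH with sudo; this sketch reads the files directly):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // cleanupStale deletes any kubeconfig that does not mention the expected
    // control-plane endpoint, so `kubeadm init` regenerates it cleanly.
    func cleanupStale(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                fmt.Printf("%q not found in %s - removing\n", endpoint, f)
                os.Remove(f) // mirrors the `sudo rm -f` above; ignore "not exist"
            }
        }
    }

    func main() {
        cleanupStale("https://control-plane.minikube.internal:50526", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
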
	I0729 10:42:21.568173    4671 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 10:42:21.585270    4671 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 10:42:21.585374    4671 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:42:21.633076    4671 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:42:21.633134    4671 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:42:21.633179    4671 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 10:42:21.683381    4671 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:42:21.687531    4671 out.go:204]   - Generating certificates and keys ...
	I0729 10:42:21.687568    4671 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:42:21.687612    4671 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:42:21.687667    4671 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 10:42:21.687699    4671 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 10:42:21.687742    4671 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 10:42:21.687769    4671 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 10:42:21.687800    4671 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 10:42:21.687833    4671 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 10:42:21.687874    4671 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 10:42:21.687922    4671 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 10:42:21.687951    4671 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 10:42:21.687981    4671 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:42:21.784077    4671 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:42:21.880964    4671 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:42:21.976287    4671 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:42:22.033454    4671 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:42:22.062827    4671 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:42:22.063183    4671 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:42:22.063222    4671 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:42:22.151907    4671 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:42:17.730705    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:17.730850    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:42:17.743893    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:42:17.743963    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:42:17.755486    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:42:17.755553    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:42:17.766338    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:42:17.766412    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:42:17.777643    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:42:17.777708    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:42:17.789032    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:42:17.789102    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:42:17.799890    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:42:17.799958    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:42:17.810098    4497 logs.go:276] 0 containers: []
	W0729 10:42:17.810109    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:42:17.810163    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:42:17.820473    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:42:17.820492    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:42:17.820497    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:42:17.852783    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:17.852875    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:17.854212    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:42:17.854217    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:42:17.865586    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:42:17.865598    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:42:17.878501    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:42:17.878517    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:42:17.914103    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:42:17.914116    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:42:17.928114    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:42:17.928124    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:42:17.942416    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:42:17.942429    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:42:17.959915    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:42:17.959923    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:42:17.964642    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:42:17.964648    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:42:17.981734    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:42:17.981743    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:42:17.994171    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:42:17.994181    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:42:18.006008    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:42:18.006018    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:42:18.021721    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:42:18.021738    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:42:18.038282    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:42:18.038296    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:42:18.050367    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:42:18.050378    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:42:18.076049    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:18.076066    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:42:18.076099    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:42:18.076112    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:18.076118    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:18.076123    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:18.076126    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:22.155823    4671 out.go:204]   - Booting up control plane ...
	I0729 10:42:22.155873    4671 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:42:22.155922    4671 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:42:22.155975    4671 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:42:22.156018    4671 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:42:22.156098    4671 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 10:42:26.654014    4671 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.500996 seconds
	I0729 10:42:26.654075    4671 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:42:26.658022    4671 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:42:27.175118    4671 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:42:27.175425    4671 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-396000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:42:27.678086    4671 kubeadm.go:310] [bootstrap-token] Using token: xjj04q.3qhbk0y1mpomvu5q
	I0729 10:42:27.684260    4671 out.go:204]   - Configuring RBAC rules ...
	I0729 10:42:27.684334    4671 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:42:27.684383    4671 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:42:27.686071    4671 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:42:27.690868    4671 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 10:42:27.691898    4671 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:42:27.692753    4671 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:42:27.695863    4671 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:42:27.864969    4671 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:42:28.082936    4671 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:42:28.083451    4671 kubeadm.go:310] 
	I0729 10:42:28.083488    4671 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:42:28.083491    4671 kubeadm.go:310] 
	I0729 10:42:28.083537    4671 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:42:28.083540    4671 kubeadm.go:310] 
	I0729 10:42:28.083552    4671 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:42:28.083582    4671 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:42:28.083614    4671 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:42:28.083619    4671 kubeadm.go:310] 
	I0729 10:42:28.083650    4671 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:42:28.083653    4671 kubeadm.go:310] 
	I0729 10:42:28.083684    4671 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:42:28.083691    4671 kubeadm.go:310] 
	I0729 10:42:28.083717    4671 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:42:28.083753    4671 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:42:28.083791    4671 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:42:28.083794    4671 kubeadm.go:310] 
	I0729 10:42:28.083836    4671 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:42:28.083888    4671 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:42:28.083891    4671 kubeadm.go:310] 
	I0729 10:42:28.083945    4671 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xjj04q.3qhbk0y1mpomvu5q \
	I0729 10:42:28.083997    4671 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e543544bbdf55d58d5e8ecb84a321dadc33a389aefb88a9b79f2e5e89d2eeaba \
	I0729 10:42:28.084009    4671 kubeadm.go:310] 	--control-plane 
	I0729 10:42:28.084012    4671 kubeadm.go:310] 
	I0729 10:42:28.084064    4671 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:42:28.084068    4671 kubeadm.go:310] 
	I0729 10:42:28.084109    4671 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xjj04q.3qhbk0y1mpomvu5q \
	I0729 10:42:28.084156    4671 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e543544bbdf55d58d5e8ecb84a321dadc33a389aefb88a9b79f2e5e89d2eeaba 
	I0729 10:42:28.084267    4671 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 10:42:28.084343    4671 cni.go:84] Creating CNI manager for ""
	I0729 10:42:28.084352    4671 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:42:28.091987    4671 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 10:42:28.096031    4671 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 10:42:28.099465    4671 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 10:42:28.104534    4671 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:42:28.104607    4671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:42:28.104658    4671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-396000 minikube.k8s.io/updated_at=2024_07_29T10_42_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=stopped-upgrade-396000 minikube.k8s.io/primary=true
	I0729 10:42:28.149105    4671 ops.go:34] apiserver oom_adj: -16
	I0729 10:42:28.149111    4671 kubeadm.go:1113] duration metric: took 44.546584ms to wait for elevateKubeSystemPrivileges
	I0729 10:42:28.149126    4671 kubeadm.go:394] duration metric: took 4m11.265467167s to StartCluster
	I0729 10:42:28.149137    4671 settings.go:142] acquiring lock: {Name:mk00a8a4362ef98c344b6c02e7313691374680e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:42:28.149226    4671 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:42:28.149622    4671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/kubeconfig: {Name:mk69e1ff39ac907f2664a3f00c50d678e5bdc356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:42:28.149820    4671 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:42:28.149904    4671 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:42:28.149914    4671 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 10:42:28.149977    4671 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-396000"
	I0729 10:42:28.149992    4671 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-396000"
	I0729 10:42:28.149993    4671 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-396000"
	W0729 10:42:28.149996    4671 addons.go:243] addon storage-provisioner should already be in state true
	I0729 10:42:28.150002    4671 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-396000"
	I0729 10:42:28.150008    4671 host.go:66] Checking if "stopped-upgrade-396000" exists ...
	I0729 10:42:28.150442    4671 retry.go:31] will retry after 1.184300369s: connect: dial unix /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/monitor: connect: connection refused
	I0729 10:42:28.153982    4671 out.go:177] * Verifying Kubernetes components...
	I0729 10:42:28.161916    4671 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:42:28.166073    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:42:28.169029    4671 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:42:28.169036    4671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:42:28.169042    4671 sshutil.go:53] new ssh client: &{IP:localhost Port:50491 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/id_rsa Username:docker}
	I0729 10:42:28.253865    4671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:42:28.258956    4671 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:42:28.258996    4671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:42:28.262986    4671 api_server.go:72] duration metric: took 113.159166ms to wait for apiserver process to appear ...
	I0729 10:42:28.262995    4671 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:42:28.263001    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:28.274907    4671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:42:28.079921    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:29.336020    4671 kapi.go:59] client config for stopped-upgrade-396000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/client.key", CAFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1044f80c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 10:42:29.336165    4671 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-396000"
	W0729 10:42:29.336171    4671 addons.go:243] addon default-storageclass should already be in state true
	I0729 10:42:29.336184    4671 host.go:66] Checking if "stopped-upgrade-396000" exists ...
	I0729 10:42:29.336765    4671 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:42:29.336771    4671 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:42:29.336777    4671 sshutil.go:53] new ssh client: &{IP:localhost Port:50491 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/id_rsa Username:docker}
	I0729 10:42:29.365369    4671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:42:33.264171    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:33.264189    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:33.081999    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:33.082267    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:42:33.104578    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:42:33.104691    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:42:33.120245    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:42:33.120316    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:42:33.149269    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:42:33.149347    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:42:33.164820    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:42:33.164889    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:42:33.175695    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:42:33.175758    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:42:33.185960    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:42:33.186021    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:42:33.195877    4497 logs.go:276] 0 containers: []
	W0729 10:42:33.195889    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:42:33.195949    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:42:33.206076    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:42:33.206093    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:42:33.206097    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:42:33.220581    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:42:33.220593    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:42:33.234440    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:42:33.234454    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:42:33.245801    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:42:33.245811    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:42:33.257446    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:42:33.257458    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:42:33.272914    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:42:33.272923    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:42:33.296877    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:42:33.296886    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:42:33.330542    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:42:33.330553    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:42:33.343833    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:42:33.343843    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:42:33.355660    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:42:33.355669    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:42:33.367157    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:42:33.367168    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:42:33.379369    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:42:33.379379    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:42:33.412768    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:33.412860    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:33.414283    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:42:33.414289    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:42:33.419562    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:42:33.419572    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:42:33.431113    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:42:33.431123    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:42:33.449980    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:33.449990    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:42:33.450019    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:42:33.450023    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:33.450026    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:33.450037    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:33.450042    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:38.264781    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:38.264823    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:43.264938    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:43.264964    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:43.453849    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:48.265117    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:48.265174    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:48.455923    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:48.456067    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:42:48.468133    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:42:48.468211    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:42:48.478538    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:42:48.478619    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:42:48.492264    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:42:48.492338    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:42:48.502595    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:42:48.502660    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:42:48.515581    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:42:48.515642    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:42:48.526236    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:42:48.526303    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:42:48.536085    4497 logs.go:276] 0 containers: []
	W0729 10:42:48.536099    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:42:48.536148    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:42:48.546687    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:42:48.546703    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:42:48.546710    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:42:48.560404    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:42:48.560418    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:42:48.572526    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:42:48.572539    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:42:48.606507    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:48.606598    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:48.607963    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:42:48.607968    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:42:48.612318    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:42:48.612327    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:42:48.623950    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:42:48.623961    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:42:48.639136    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:42:48.639149    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:42:48.652143    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:42:48.652153    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:42:48.677212    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:42:48.677222    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:42:48.688877    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:42:48.688891    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:42:48.723838    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:42:48.723850    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:42:48.736154    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:42:48.736168    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:42:48.753576    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:42:48.753589    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:42:48.767106    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:42:48.767120    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:42:48.781423    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:42:48.781437    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:42:48.796322    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:48.796336    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:42:48.796363    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:42:48.796368    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:42:48.796371    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:42:48.796377    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:48.796380    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:53.265489    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:53.265530    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:58.266024    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:58.266067    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 10:42:59.426094    4671 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 10:42:59.430554    4671 out.go:177] * Enabled addons: storage-provisioner
	I0729 10:42:58.800186    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:59.442424    4671 addons.go:510] duration metric: took 31.293490375s for enable addons: enabled=[storage-provisioner]
	I0729 10:43:03.266683    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:03.266744    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:03.802302    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:03.802447    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:43:03.817295    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:43:03.817372    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:43:03.828830    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:43:03.828898    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:43:03.839756    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:43:03.839818    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:43:03.850115    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:43:03.850184    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:43:03.860488    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:43:03.860557    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:43:03.871207    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:43:03.871271    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:43:03.883856    4497 logs.go:276] 0 containers: []
	W0729 10:43:03.883868    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:43:03.883920    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:43:03.894920    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:43:03.894935    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:43:03.894941    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:43:03.929562    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:43:03.929655    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:43:03.931050    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:43:03.931055    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:43:03.942676    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:43:03.942686    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:43:03.954706    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:43:03.954717    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:43:03.959165    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:43:03.959174    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:43:03.970802    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:43:03.970817    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:43:03.985159    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:43:03.985171    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:43:04.010807    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:43:04.010823    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:43:04.045505    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:43:04.045515    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:43:04.059818    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:43:04.059831    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:43:04.071379    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:43:04.071389    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:43:04.088750    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:43:04.088759    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:43:04.106304    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:43:04.106317    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:43:04.120348    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:43:04.120361    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:43:04.136688    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:43:04.136701    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:43:04.148556    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:43:04.148567    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:43:04.148595    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:43:04.148599    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:43:04.148603    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:43:04.148607    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:43:04.148609    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:43:08.268097    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:08.268144    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:13.269393    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:13.269430    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:14.152444    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:18.271033    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:18.271082    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:19.154600    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:19.154832    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:43:19.179722    4497 logs.go:276] 1 containers: [7b7891e29a8d]
	I0729 10:43:19.179822    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:43:19.196370    4497 logs.go:276] 1 containers: [269687809c54]
	I0729 10:43:19.196483    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:43:19.209695    4497 logs.go:276] 4 containers: [b514acbb16d2 641fdd39cb5f d43e4d4e905e b67afba30dbd]
	I0729 10:43:19.209771    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:43:19.221005    4497 logs.go:276] 1 containers: [b1a6466f958e]
	I0729 10:43:19.221070    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:43:19.231692    4497 logs.go:276] 1 containers: [80c49af6ca89]
	I0729 10:43:19.231766    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:43:19.242340    4497 logs.go:276] 1 containers: [378937110bea]
	I0729 10:43:19.242416    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:43:19.253483    4497 logs.go:276] 0 containers: []
	W0729 10:43:19.253497    4497 logs.go:278] No container was found matching "kindnet"
	I0729 10:43:19.253569    4497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:43:19.263969    4497 logs.go:276] 1 containers: [f6528dc8e174]
	I0729 10:43:19.263986    4497 logs.go:123] Gathering logs for coredns [b67afba30dbd] ...
	I0729 10:43:19.263991    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67afba30dbd"
	I0729 10:43:19.275679    4497 logs.go:123] Gathering logs for kube-scheduler [b1a6466f958e] ...
	I0729 10:43:19.275689    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a6466f958e"
	I0729 10:43:19.290100    4497 logs.go:123] Gathering logs for kube-proxy [80c49af6ca89] ...
	I0729 10:43:19.290111    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c49af6ca89"
	I0729 10:43:19.301376    4497 logs.go:123] Gathering logs for kube-controller-manager [378937110bea] ...
	I0729 10:43:19.301387    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 378937110bea"
	I0729 10:43:19.319236    4497 logs.go:123] Gathering logs for storage-provisioner [f6528dc8e174] ...
	I0729 10:43:19.319248    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6528dc8e174"
	I0729 10:43:19.337410    4497 logs.go:123] Gathering logs for kube-apiserver [7b7891e29a8d] ...
	I0729 10:43:19.337421    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7891e29a8d"
	I0729 10:43:19.351564    4497 logs.go:123] Gathering logs for etcd [269687809c54] ...
	I0729 10:43:19.351574    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 269687809c54"
	I0729 10:43:19.365953    4497 logs.go:123] Gathering logs for coredns [b514acbb16d2] ...
	I0729 10:43:19.365965    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514acbb16d2"
	I0729 10:43:19.377673    4497 logs.go:123] Gathering logs for coredns [d43e4d4e905e] ...
	I0729 10:43:19.377685    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d43e4d4e905e"
	I0729 10:43:19.389396    4497 logs.go:123] Gathering logs for container status ...
	I0729 10:43:19.389406    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:43:19.401432    4497 logs.go:123] Gathering logs for dmesg ...
	I0729 10:43:19.401443    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:43:19.405789    4497 logs.go:123] Gathering logs for coredns [641fdd39cb5f] ...
	I0729 10:43:19.405795    4497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fdd39cb5f"
	I0729 10:43:19.416998    4497 logs.go:123] Gathering logs for kubelet ...
	I0729 10:43:19.417011    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:43:19.449738    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:43:19.449838    4497 logs.go:138] Found kubelet problem: Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:43:19.451272    4497 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:43:19.451280    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:43:19.506876    4497 logs.go:123] Gathering logs for Docker ...
	I0729 10:43:19.506887    4497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:43:19.530308    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:43:19.530317    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:43:19.530341    4497 out.go:239] X Problems detected in kubelet:
	W0729 10:43:19.530345    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	W0729 10:43:19.530348    4497 out.go:239]   Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	I0729 10:43:19.530353    4497 out.go:304] Setting ErrFile to fd 2...
	I0729 10:43:19.530373    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:43:23.271292    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:23.271336    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:28.273456    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:28.273569    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:43:28.287881    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:43:28.287940    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:43:28.298859    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:43:28.298926    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:43:28.309204    4671 logs.go:276] 2 containers: [38e59427cdd0 9079d201f3f1]
	I0729 10:43:28.309270    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:43:28.319463    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:43:28.319526    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:43:28.329729    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:43:28.329797    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:43:28.340707    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:43:28.340776    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:43:28.351176    4671 logs.go:276] 0 containers: []
	W0729 10:43:28.351190    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:43:28.351249    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:43:28.361343    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:43:28.361357    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:43:28.361362    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:43:28.375897    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:43:28.375910    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:43:28.392592    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:43:28.392602    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:43:28.407227    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:43:28.407237    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:43:28.425418    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:43:28.425427    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:43:28.436209    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:43:28.436220    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:43:28.449820    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:43:28.449831    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:43:28.488622    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:43:28.488637    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:43:28.492864    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:43:28.492872    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:43:28.527895    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:43:28.527907    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:43:28.541992    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:43:28.542003    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:43:28.553829    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:43:28.553843    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:43:28.569401    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:43:28.569413    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:43:29.534169    4497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:31.097001    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:34.536309    4497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:34.541020    4497 out.go:177] 
	W0729 10:43:34.544066    4497 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 10:43:34.544071    4497 out.go:239] * 
	W0729 10:43:34.544540    4497 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:43:34.559767    4497 out.go:177] 
	I0729 10:43:36.099543    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:36.099729    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:43:36.114001    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:43:36.114084    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:43:36.125915    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:43:36.125992    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:43:36.137335    4671 logs.go:276] 2 containers: [38e59427cdd0 9079d201f3f1]
	I0729 10:43:36.137410    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:43:36.147417    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:43:36.147477    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:43:36.157597    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:43:36.157670    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:43:36.167345    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:43:36.167404    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:43:36.177535    4671 logs.go:276] 0 containers: []
	W0729 10:43:36.177550    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:43:36.177609    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:43:36.190402    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:43:36.190418    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:43:36.190424    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:43:36.202228    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:43:36.202240    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:43:36.219219    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:43:36.219229    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:43:36.242609    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:43:36.242620    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:43:36.278535    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:43:36.278544    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:43:36.282446    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:43:36.282455    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:43:36.317359    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:43:36.317369    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:43:36.331446    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:43:36.331456    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:43:36.345630    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:43:36.345644    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:43:36.357238    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:43:36.357251    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:43:36.371853    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:43:36.371865    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:43:36.386950    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:43:36.386962    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:43:36.398118    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:43:36.398131    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:43:38.911402    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:43.913582    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:43.913748    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:43:43.930020    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:43:43.930103    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:43:43.944058    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:43:43.944128    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:43:43.954834    4671 logs.go:276] 2 containers: [38e59427cdd0 9079d201f3f1]
	I0729 10:43:43.954895    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:43:43.966706    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:43:43.966766    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:43:43.977629    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:43:43.977694    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:43:43.990328    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:43:43.990384    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:43:43.999815    4671 logs.go:276] 0 containers: []
	W0729 10:43:43.999826    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:43:43.999878    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:43:44.010409    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:43:44.010424    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:43:44.010430    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:43:44.025499    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:43:44.025509    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:43:44.037115    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:43:44.037124    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:43:44.055106    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:43:44.055118    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:43:44.094290    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:43:44.094301    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:43:44.098425    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:43:44.098433    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:43:44.131779    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:43:44.131791    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:43:44.153251    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:43:44.153264    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:43:44.164901    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:43:44.164911    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:43:44.179149    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:43:44.179161    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:43:44.190605    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:43:44.190615    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:43:44.202100    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:43:44.202110    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:43:44.225458    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:43:44.225464    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:43:46.738452    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
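	
	The five-second gap between each "Checking apiserver healthz" entry and its "context deadline exceeded" follow-up above is the probe's client timeout expiring: the apiserver at 10.0.2.15:8443 never answers the host. A minimal standalone sketch of an equivalent probe (not minikube's implementation; the URL comes from the log, and certificate verification is skipped only because the test cluster uses a self-signed CA):
	
	// healthz_probe.go - diagnostic sketch, not part of minikube.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gap in the log above
			Transport: &http.Transport{
				// The test cluster's CA is self-signed; skip verification
				// for this diagnostic probe only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err) // e.g. context deadline exceeded
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz status:", resp.Status)
	}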
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-07-29 17:34:24 UTC, ends at Mon 2024-07-29 17:43:50 UTC. --
	Jul 29 17:43:31 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:31Z" level=error msg="ContainerStats resp: {0x40007ad340 linux}"
	Jul 29 17:43:31 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:31Z" level=error msg="ContainerStats resp: {0x40005a7f80 linux}"
	Jul 29 17:43:31 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:31Z" level=error msg="ContainerStats resp: {0x4000926600 linux}"
	Jul 29 17:43:32 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:32Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 17:43:32 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:32Z" level=error msg="ContainerStats resp: {0x400084b140 linux}"
	Jul 29 17:43:33 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:33Z" level=error msg="ContainerStats resp: {0x40007ada80 linux}"
	Jul 29 17:43:33 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:33Z" level=error msg="ContainerStats resp: {0x40008b6a80 linux}"
	Jul 29 17:43:33 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:33Z" level=error msg="ContainerStats resp: {0x4000932300 linux}"
	Jul 29 17:43:33 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:33Z" level=error msg="ContainerStats resp: {0x4000932440 linux}"
	Jul 29 17:43:33 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:33Z" level=error msg="ContainerStats resp: {0x40004ec040 linux}"
	Jul 29 17:43:33 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:33Z" level=error msg="ContainerStats resp: {0x40004eca40 linux}"
	Jul 29 17:43:33 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:33Z" level=error msg="ContainerStats resp: {0x40008b6640 linux}"
	Jul 29 17:43:37 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:37Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 17:43:42 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:42Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 17:43:43 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:43Z" level=error msg="ContainerStats resp: {0x40007acfc0 linux}"
	Jul 29 17:43:43 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:43Z" level=error msg="ContainerStats resp: {0x40007ad100 linux}"
	Jul 29 17:43:44 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:44Z" level=error msg="ContainerStats resp: {0x400084a700 linux}"
	Jul 29 17:43:45 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:45Z" level=error msg="ContainerStats resp: {0x400084b040 linux}"
	Jul 29 17:43:45 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:45Z" level=error msg="ContainerStats resp: {0x4000928740 linux}"
	Jul 29 17:43:45 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:45Z" level=error msg="ContainerStats resp: {0x4000928bc0 linux}"
	Jul 29 17:43:45 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:45Z" level=error msg="ContainerStats resp: {0x4000929280 linux}"
	Jul 29 17:43:45 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:45Z" level=error msg="ContainerStats resp: {0x400084bdc0 linux}"
	Jul 29 17:43:45 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:45Z" level=error msg="ContainerStats resp: {0x40004ec1c0 linux}"
	Jul 29 17:43:45 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:45Z" level=error msg="ContainerStats resp: {0x40004ec580 linux}"
	Jul 29 17:43:47 running-upgrade-466000 cri-dockerd[3117]: time="2024-07-29T17:43:47Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	67fc407281ab8       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   cdae7feacfbd1
	7d89b2a8a92ce       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   19f44ca5a5d7a
	b514acbb16d23       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   19f44ca5a5d7a
	641fdd39cb5f1       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   cdae7feacfbd1
	80c49af6ca89e       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   084fdc02c183a
	f6528dc8e1743       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   32655ac99d06f
	378937110bea0       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   5c4c2097897f7
	b1a6466f958e2       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   f4f9576911312
	7b7891e29a8de       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   83dc9a43453b8
	269687809c543       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   52d13c7c79473
	
	
	==> coredns [641fdd39cb5f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6306515389591880778.7767187483260966568. HINFO: read udp 10.244.0.3:41212->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6306515389591880778.7767187483260966568. HINFO: read udp 10.244.0.3:34436->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6306515389591880778.7767187483260966568. HINFO: read udp 10.244.0.3:45468->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6306515389591880778.7767187483260966568. HINFO: read udp 10.244.0.3:42195->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [67fc407281ab] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 821036317217357231.6306472067248373728. HINFO: read udp 10.244.0.3:58132->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 821036317217357231.6306472067248373728. HINFO: read udp 10.244.0.3:44282->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 821036317217357231.6306472067248373728. HINFO: read udp 10.244.0.3:59663->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 821036317217357231.6306472067248373728. HINFO: read udp 10.244.0.3:58894->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 821036317217357231.6306472067248373728. HINFO: read udp 10.244.0.3:53969->10.0.2.3:53: i/o timeout
	
	
	==> coredns [7d89b2a8a92c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 541051427859590049.4355061531232887054. HINFO: read udp 10.244.0.2:34386->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 541051427859590049.4355061531232887054. HINFO: read udp 10.244.0.2:53005->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 541051427859590049.4355061531232887054. HINFO: read udp 10.244.0.2:54531->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 541051427859590049.4355061531232887054. HINFO: read udp 10.244.0.2:52520->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 541051427859590049.4355061531232887054. HINFO: read udp 10.244.0.2:45942->10.0.2.3:53: i/o timeout
	
	
	==> coredns [b514acbb16d2] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2113720620402777299.5932207472527733922. HINFO: read udp 10.244.0.2:48993->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2113720620402777299.5932207472527733922. HINFO: read udp 10.244.0.2:50609->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2113720620402777299.5932207472527733922. HINFO: read udp 10.244.0.2:60957->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2113720620402777299.5932207472527733922. HINFO: read udp 10.244.0.2:54503->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2113720620402777299.5932207472527733922. HINFO: read udp 10.244.0.2:41119->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2113720620402777299.5932207472527733922. HINFO: read udp 10.244.0.2:53020->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2113720620402777299.5932207472527733922. HINFO: read udp 10.244.0.2:36697->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2113720620402777299.5932207472527733922. HINFO: read udp 10.244.0.2:44047->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2113720620402777299.5932207472527733922. HINFO: read udp 10.244.0.2:51187->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2113720620402777299.5932207472527733922. HINFO: read udp 10.244.0.2:37033->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
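	
	Every coredns block above fails the same way: HINFO probes to the upstream resolver 10.0.2.3:53 (QEMU's built-in user-mode DNS forwarder) time out, which is also why both coredns containers are on their second restart in the container-status table. A minimal sketch that forces a lookup through the same upstream, assuming it is run from a vantage point that can actually reach 10.0.2.3 (i.e. inside the guest); the hostname kubernetes.io is an arbitrary example:
	
	// dns_probe.go - diagnostic sketch; reproduces the coredns upstream failure.
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		r := &net.Resolver{
			PreferGo: true,
			// Route every lookup to the VM's upstream resolver instead of
			// whatever /etc/resolv.conf says.
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.io")
		if err != nil {
			fmt.Println("lookup failed:", err) // an i/o timeout mirrors the log
			return
		}
		fmt.Println("resolved:", addrs)
	}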
	
	
	==> describe nodes <==
	Name:               running-upgrade-466000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-466000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899
	                    minikube.k8s.io/name=running-upgrade-466000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T10_39_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:39:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-466000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:43:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:39:30 +0000   Mon, 29 Jul 2024 17:39:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:39:30 +0000   Mon, 29 Jul 2024 17:39:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:39:30 +0000   Mon, 29 Jul 2024 17:39:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:39:30 +0000   Mon, 29 Jul 2024 17:39:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-466000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 93a9aa3f984c4a5299cabce1b3ef7f60
	  System UUID:                93a9aa3f984c4a5299cabce1b3ef7f60
	  Boot ID:                    6506b2bc-cc33-400f-8f50-4cf9474408c9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-js9lh                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 coredns-6d4b75cb6d-p8b27                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-running-upgrade-466000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m20s
	  kube-system                 kube-apiserver-running-upgrade-466000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-running-upgrade-466000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-jlzr8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-running-upgrade-466000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m7s   kube-proxy       
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  NodeReady                4m20s  kubelet          Node running-upgrade-466000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m20s  kubelet          Node running-upgrade-466000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s  kubelet          Node running-upgrade-466000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s  kubelet          Node running-upgrade-466000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m8s   node-controller  Node running-upgrade-466000 event: Registered Node running-upgrade-466000 in Controller
	
	
	==> dmesg <==
	[  +1.707275] systemd-fstab-generator[872]: Ignoring "noauto" for root device
	[  +0.065064] systemd-fstab-generator[883]: Ignoring "noauto" for root device
	[  +0.082265] systemd-fstab-generator[894]: Ignoring "noauto" for root device
	[  +1.141388] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.085576] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
	[  +0.083690] systemd-fstab-generator[1055]: Ignoring "noauto" for root device
	[  +2.842126] systemd-fstab-generator[1283]: Ignoring "noauto" for root device
	[Jul29 17:35] systemd-fstab-generator[2006]: Ignoring "noauto" for root device
	[  +2.526053] systemd-fstab-generator[2283]: Ignoring "noauto" for root device
	[  +0.138530] systemd-fstab-generator[2319]: Ignoring "noauto" for root device
	[  +0.098683] systemd-fstab-generator[2332]: Ignoring "noauto" for root device
	[  +0.096202] systemd-fstab-generator[2345]: Ignoring "noauto" for root device
	[  +2.590202] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.199735] systemd-fstab-generator[3073]: Ignoring "noauto" for root device
	[  +0.080894] systemd-fstab-generator[3085]: Ignoring "noauto" for root device
	[  +0.088665] systemd-fstab-generator[3096]: Ignoring "noauto" for root device
	[  +0.095139] systemd-fstab-generator[3110]: Ignoring "noauto" for root device
	[  +2.220476] systemd-fstab-generator[3261]: Ignoring "noauto" for root device
	[  +3.533569] systemd-fstab-generator[3648]: Ignoring "noauto" for root device
	[  +1.540661] systemd-fstab-generator[3944]: Ignoring "noauto" for root device
	[ +17.298108] kauditd_printk_skb: 68 callbacks suppressed
	[Jul29 17:39] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.608029] systemd-fstab-generator[12000]: Ignoring "noauto" for root device
	[  +5.633769] systemd-fstab-generator[12595]: Ignoring "noauto" for root device
	[  +0.469423] systemd-fstab-generator[12727]: Ignoring "noauto" for root device
	
	
	==> etcd [269687809c54] <==
	{"level":"info","ts":"2024-07-29T17:39:25.768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-29T17:39:25.768Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-29T17:39:25.772Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T17:39:25.772Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T17:39:25.772Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T17:39:25.772Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T17:39:25.772Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T17:39:26.014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T17:39:26.014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T17:39:26.015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-29T17:39:26.015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T17:39:26.015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T17:39:26.015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T17:39:26.015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T17:39:26.015Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:39:26.015Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:39:26.016Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:39:26.016Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:39:26.016Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-466000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T17:39:26.016Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:39:26.016Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T17:39:26.016Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T17:39:26.016Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:39:26.017Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T17:39:26.024Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 17:43:50 up 9 min,  0 users,  load average: 0.23, 0.38, 0.22
	Linux running-upgrade-466000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [7b7891e29a8d] <==
	I0729 17:39:27.505472       1 controller.go:611] quota admission added evaluator for: namespaces
	I0729 17:39:27.542632       1 cache.go:39] Caches are synced for autoregister controller
	I0729 17:39:27.542747       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 17:39:27.542809       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 17:39:27.542906       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 17:39:27.542935       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 17:39:27.553516       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0729 17:39:28.275535       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 17:39:28.447115       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0729 17:39:28.448554       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0729 17:39:28.448601       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 17:39:28.566190       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 17:39:28.575126       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 17:39:28.609834       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0729 17:39:28.612312       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0729 17:39:28.612744       1 controller.go:611] quota admission added evaluator for: endpoints
	I0729 17:39:28.614005       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 17:39:29.583785       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 17:39:29.925522       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 17:39:29.928986       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0729 17:39:29.936739       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 17:39:29.977669       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 17:39:43.238645       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0729 17:39:43.287538       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0729 17:39:43.799092       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [378937110bea] <==
	I0729 17:39:42.594628       1 event.go:294] "Event occurred" object="running-upgrade-466000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-466000 event: Registered Node running-upgrade-466000 in Controller"
	I0729 17:39:42.596314       1 shared_informer.go:262] Caches are synced for attach detach
	I0729 17:39:42.600393       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0729 17:39:42.603205       1 shared_informer.go:262] Caches are synced for disruption
	I0729 17:39:42.603314       1 disruption.go:371] Sending events to api server.
	I0729 17:39:42.605529       1 shared_informer.go:262] Caches are synced for endpoint
	I0729 17:39:42.607848       1 shared_informer.go:262] Caches are synced for deployment
	I0729 17:39:42.611107       1 shared_informer.go:262] Caches are synced for HPA
	I0729 17:39:42.633896       1 shared_informer.go:262] Caches are synced for daemon sets
	I0729 17:39:42.633902       1 shared_informer.go:262] Caches are synced for ephemeral
	I0729 17:39:42.633910       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0729 17:39:42.636367       1 shared_informer.go:262] Caches are synced for stateful set
	I0729 17:39:42.636386       1 shared_informer.go:262] Caches are synced for persistent volume
	I0729 17:39:42.639173       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 17:39:42.683491       1 shared_informer.go:262] Caches are synced for job
	I0729 17:39:42.683574       1 shared_informer.go:262] Caches are synced for GC
	I0729 17:39:42.684774       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0729 17:39:42.687023       1 shared_informer.go:262] Caches are synced for PVC protection
	I0729 17:39:43.053818       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 17:39:43.086039       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 17:39:43.086049       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0729 17:39:43.240940       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0729 17:39:43.290284       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jlzr8"
	I0729 17:39:43.439040       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-js9lh"
	I0729 17:39:43.442104       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-p8b27"
	
	
	==> kube-proxy [80c49af6ca89] <==
	I0729 17:39:43.788553       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0729 17:39:43.788577       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0729 17:39:43.788587       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 17:39:43.797270       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 17:39:43.797282       1 server_others.go:206] "Using iptables Proxier"
	I0729 17:39:43.797294       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 17:39:43.797405       1 server.go:661] "Version info" version="v1.24.1"
	I0729 17:39:43.797413       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:39:43.797666       1 config.go:317] "Starting service config controller"
	I0729 17:39:43.797702       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 17:39:43.797722       1 config.go:226] "Starting endpoint slice config controller"
	I0729 17:39:43.797727       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 17:39:43.798010       1 config.go:444] "Starting node config controller"
	I0729 17:39:43.798033       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 17:39:43.898417       1 shared_informer.go:262] Caches are synced for node config
	I0729 17:39:43.898420       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0729 17:39:43.898432       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [b1a6466f958e] <==
	W0729 17:39:27.503000       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 17:39:27.503323       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 17:39:27.503025       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 17:39:27.503370       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 17:39:27.503042       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 17:39:27.503399       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 17:39:27.503056       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 17:39:27.503439       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 17:39:27.503071       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 17:39:27.503476       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 17:39:27.503156       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 17:39:27.503517       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 17:39:27.503185       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 17:39:27.503552       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 17:39:27.503197       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 17:39:27.503603       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 17:39:28.363637       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 17:39:28.363720       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 17:39:28.462220       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 17:39:28.462271       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 17:39:28.486767       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 17:39:28.486863       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 17:39:28.524956       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 17:39:28.524971       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0729 17:39:28.898180       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-07-29 17:34:24 UTC, ends at Mon 2024-07-29 17:43:51 UTC. --
	Jul 29 17:39:32 running-upgrade-466000 kubelet[12601]: E0729 17:39:32.159158   12601 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-466000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-466000"
	Jul 29 17:39:42 running-upgrade-466000 kubelet[12601]: I0729 17:39:42.458035   12601 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 17:39:42 running-upgrade-466000 kubelet[12601]: I0729 17:39:42.458380   12601 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 17:39:42 running-upgrade-466000 kubelet[12601]: I0729 17:39:42.600140   12601 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 17:39:42 running-upgrade-466000 kubelet[12601]: I0729 17:39:42.759159   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/db925472-8206-40c7-ad35-7aacade15f09-tmp\") pod \"storage-provisioner\" (UID: \"db925472-8206-40c7-ad35-7aacade15f09\") " pod="kube-system/storage-provisioner"
	Jul 29 17:39:42 running-upgrade-466000 kubelet[12601]: I0729 17:39:42.759276   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7snq\" (UniqueName: \"kubernetes.io/projected/db925472-8206-40c7-ad35-7aacade15f09-kube-api-access-f7snq\") pod \"storage-provisioner\" (UID: \"db925472-8206-40c7-ad35-7aacade15f09\") " pod="kube-system/storage-provisioner"
	Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: I0729 17:39:43.293799   12601 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: I0729 17:39:43.441562   12601 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: W0729 17:39:43.444296   12601 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: E0729 17:39:43.444331   12601 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-466000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-466000' and this object
	Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: I0729 17:39:43.446999   12601 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: I0729 17:39:43.466813   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b047d6b-833c-43cc-96aa-c1df746974bd-xtables-lock\") pod \"kube-proxy-jlzr8\" (UID: \"8b047d6b-833c-43cc-96aa-c1df746974bd\") " pod="kube-system/kube-proxy-jlzr8"
	Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: I0729 17:39:43.467012   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b047d6b-833c-43cc-96aa-c1df746974bd-lib-modules\") pod \"kube-proxy-jlzr8\" (UID: \"8b047d6b-833c-43cc-96aa-c1df746974bd\") " pod="kube-system/kube-proxy-jlzr8"
	Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: I0729 17:39:43.467039   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mz24\" (UniqueName: \"kubernetes.io/projected/8b047d6b-833c-43cc-96aa-c1df746974bd-kube-api-access-6mz24\") pod \"kube-proxy-jlzr8\" (UID: \"8b047d6b-833c-43cc-96aa-c1df746974bd\") " pod="kube-system/kube-proxy-jlzr8"
	Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: I0729 17:39:43.467062   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8b047d6b-833c-43cc-96aa-c1df746974bd-kube-proxy\") pod \"kube-proxy-jlzr8\" (UID: \"8b047d6b-833c-43cc-96aa-c1df746974bd\") " pod="kube-system/kube-proxy-jlzr8"
	Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: I0729 17:39:43.569093   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f6f8cd3-cc15-4348-9556-16688a41a206-config-volume\") pod \"coredns-6d4b75cb6d-js9lh\" (UID: \"7f6f8cd3-cc15-4348-9556-16688a41a206\") " pod="kube-system/coredns-6d4b75cb6d-js9lh"
	Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: I0729 17:39:43.569141   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpb66\" (UniqueName: \"kubernetes.io/projected/56d73b8f-07cd-4414-a51b-48e1af64375e-kube-api-access-bpb66\") pod \"coredns-6d4b75cb6d-p8b27\" (UID: \"56d73b8f-07cd-4414-a51b-48e1af64375e\") " pod="kube-system/coredns-6d4b75cb6d-p8b27"
	Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: I0729 17:39:43.569153   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxf48\" (UniqueName: \"kubernetes.io/projected/7f6f8cd3-cc15-4348-9556-16688a41a206-kube-api-access-jxf48\") pod \"coredns-6d4b75cb6d-js9lh\" (UID: \"7f6f8cd3-cc15-4348-9556-16688a41a206\") " pod="kube-system/coredns-6d4b75cb6d-js9lh"
	Jul 29 17:39:43 running-upgrade-466000 kubelet[12601]: I0729 17:39:43.569166   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56d73b8f-07cd-4414-a51b-48e1af64375e-config-volume\") pod \"coredns-6d4b75cb6d-p8b27\" (UID: \"56d73b8f-07cd-4414-a51b-48e1af64375e\") " pod="kube-system/coredns-6d4b75cb6d-p8b27"
	Jul 29 17:39:44 running-upgrade-466000 kubelet[12601]: E0729 17:39:44.669368   12601 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Jul 29 17:39:44 running-upgrade-466000 kubelet[12601]: E0729 17:39:44.669368   12601 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Jul 29 17:39:44 running-upgrade-466000 kubelet[12601]: E0729 17:39:44.669471   12601 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/56d73b8f-07cd-4414-a51b-48e1af64375e-config-volume podName:56d73b8f-07cd-4414-a51b-48e1af64375e nodeName:}" failed. No retries permitted until 2024-07-29 17:39:45.169430765 +0000 UTC m=+15.254738471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/56d73b8f-07cd-4414-a51b-48e1af64375e-config-volume") pod "coredns-6d4b75cb6d-p8b27" (UID: "56d73b8f-07cd-4414-a51b-48e1af64375e") : failed to sync configmap cache: timed out waiting for the condition
	Jul 29 17:39:44 running-upgrade-466000 kubelet[12601]: E0729 17:39:44.669479   12601 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7f6f8cd3-cc15-4348-9556-16688a41a206-config-volume podName:7f6f8cd3-cc15-4348-9556-16688a41a206 nodeName:}" failed. No retries permitted until 2024-07-29 17:39:45.169475099 +0000 UTC m=+15.254782804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7f6f8cd3-cc15-4348-9556-16688a41a206-config-volume") pod "coredns-6d4b75cb6d-js9lh" (UID: "7f6f8cd3-cc15-4348-9556-16688a41a206") : failed to sync configmap cache: timed out waiting for the condition
	Jul 29 17:43:32 running-upgrade-466000 kubelet[12601]: I0729 17:43:32.560442   12601 scope.go:110] "RemoveContainer" containerID="b67afba30dbd7f2acb80eb7398a707f23d894152fd711179403de7a9ab0d87d2"
	Jul 29 17:43:32 running-upgrade-466000 kubelet[12601]: I0729 17:43:32.577716   12601 scope.go:110] "RemoveContainer" containerID="d43e4d4e905e28bd3f0a7f559e5a007fe40f88f7e52ea983afcc2268a64151d7"
	
	
	==> storage-provisioner [f6528dc8e174] <==
	I0729 17:39:43.129588       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 17:39:43.134023       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 17:39:43.134041       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 17:39:43.138622       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 17:39:43.138671       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-466000_7bbcf8d1-cef2-4b5a-bf18-61fa86ec3ef2!
	I0729 17:39:43.139652       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"09b7122d-4f90-4ca0-9f05-880e1a171740", APIVersion:"v1", ResourceVersion:"326", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-466000_7bbcf8d1-cef2-4b5a-bf18-61fa86ec3ef2 became leader
	I0729 17:39:43.239487       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-466000_7bbcf8d1-cef2-4b5a-bf18-61fa86ec3ef2!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-466000 -n running-upgrade-466000
E0729 10:44:06.580743    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-466000 -n running-upgrade-466000: exit status 2 (15.664492s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-466000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-466000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-466000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-466000: (1.3043975s)
--- FAIL: TestRunningBinaryUpgrade (627.23s)

TestKubernetesUpgrade (17.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-436000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-436000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.767449042s)

-- stdout --
	* [kubernetes-upgrade-436000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-436000" primary control-plane node in "kubernetes-upgrade-436000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-436000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
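
Both "Creating qemu2 VM" attempts in the stdout block abort at the same point: QEMU cannot connect to the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu2 driver never gets a network and the start exits with status 80. A minimal pre-flight sketch that checks the daemon before a run, assuming the default socket path shown in the log (the restart hint in the comment is environment-dependent):

	// socket_check.go - diagnostic sketch for the qemu2/socket_vmnet setup.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the error above; the typical
			// fix is restarting the daemon via its launchd/brew service
			// (service name and socket path are environment-dependent).
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}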
** stderr ** 
	I0729 10:36:40.818171    4578 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:36:40.818297    4578 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:40.818301    4578 out.go:304] Setting ErrFile to fd 2...
	I0729 10:36:40.818303    4578 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:40.818453    4578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:36:40.819508    4578 out.go:298] Setting JSON to false
	I0729 10:36:40.835578    4578 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3964,"bootTime":1722270636,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:36:40.835642    4578 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:36:40.840763    4578 out.go:177] * [kubernetes-upgrade-436000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:36:40.848623    4578 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:36:40.848681    4578 notify.go:220] Checking for updates...
	I0729 10:36:40.854597    4578 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:36:40.857573    4578 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:36:40.858952    4578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:36:40.861568    4578 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:36:40.864614    4578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:36:40.867957    4578 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:36:40.868026    4578 config.go:182] Loaded profile config "running-upgrade-466000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:36:40.868073    4578 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:36:40.872522    4578 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:36:40.879543    4578 start.go:297] selected driver: qemu2
	I0729 10:36:40.879553    4578 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:36:40.879561    4578 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:36:40.881834    4578 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:36:40.884575    4578 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:36:40.887648    4578 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:36:40.887664    4578 cni.go:84] Creating CNI manager for ""
	I0729 10:36:40.887672    4578 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 10:36:40.887708    4578 start.go:340] cluster config:
	{Name:kubernetes-upgrade-436000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:36:40.891264    4578 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:36:40.898622    4578 out.go:177] * Starting "kubernetes-upgrade-436000" primary control-plane node in "kubernetes-upgrade-436000" cluster
	I0729 10:36:40.902497    4578 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 10:36:40.902512    4578 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 10:36:40.902522    4578 cache.go:56] Caching tarball of preloaded images
	I0729 10:36:40.902576    4578 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:36:40.902582    4578 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 10:36:40.902633    4578 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/kubernetes-upgrade-436000/config.json ...
	I0729 10:36:40.902643    4578 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/kubernetes-upgrade-436000/config.json: {Name:mk3c9ec00927809afef031076acd41ee5f34d98f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:36:40.902986    4578 start.go:360] acquireMachinesLock for kubernetes-upgrade-436000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:36:40.903021    4578 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "kubernetes-upgrade-436000"
	I0729 10:36:40.903032    4578 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:36:40.903057    4578 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:36:40.911546    4578 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:36:40.926825    4578 start.go:159] libmachine.API.Create for "kubernetes-upgrade-436000" (driver="qemu2")
	I0729 10:36:40.926847    4578 client.go:168] LocalClient.Create starting
	I0729 10:36:40.926919    4578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:36:40.926953    4578 main.go:141] libmachine: Decoding PEM data...
	I0729 10:36:40.926960    4578 main.go:141] libmachine: Parsing certificate...
	I0729 10:36:40.926997    4578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:36:40.927019    4578 main.go:141] libmachine: Decoding PEM data...
	I0729 10:36:40.927031    4578 main.go:141] libmachine: Parsing certificate...
	I0729 10:36:40.927476    4578 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:36:41.080527    4578 main.go:141] libmachine: Creating SSH key...
	I0729 10:36:41.154220    4578 main.go:141] libmachine: Creating Disk image...
	I0729 10:36:41.154226    4578 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:36:41.154415    4578 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2
	I0729 10:36:41.163898    4578 main.go:141] libmachine: STDOUT: 
	I0729 10:36:41.163913    4578 main.go:141] libmachine: STDERR: 
	I0729 10:36:41.163969    4578 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2 +20000M
	I0729 10:36:41.172264    4578 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:36:41.172280    4578 main.go:141] libmachine: STDERR: 
	I0729 10:36:41.172298    4578 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2
	I0729 10:36:41.172303    4578 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:36:41.172318    4578 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:36:41.172347    4578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:2a:53:b9:0a:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2
	I0729 10:36:41.173953    4578 main.go:141] libmachine: STDOUT: 
	I0729 10:36:41.173968    4578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:36:41.173987    4578 client.go:171] duration metric: took 247.147292ms to LocalClient.Create
	I0729 10:36:43.176082    4578 start.go:128] duration metric: took 2.27311775s to createHost
	I0729 10:36:43.176116    4578 start.go:83] releasing machines lock for "kubernetes-upgrade-436000", held for 2.273196541s
	W0729 10:36:43.176154    4578 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:36:43.185177    4578 out.go:177] * Deleting "kubernetes-upgrade-436000" in qemu2 ...
	W0729 10:36:43.209377    4578 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:36:43.209389    4578 start.go:729] Will try again in 5 seconds ...
	I0729 10:36:48.211339    4578 start.go:360] acquireMachinesLock for kubernetes-upgrade-436000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:36:48.211850    4578 start.go:364] duration metric: took 390.333µs to acquireMachinesLock for "kubernetes-upgrade-436000"
	I0729 10:36:48.211973    4578 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:36:48.212239    4578 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:36:48.219006    4578 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:36:48.266897    4578 start.go:159] libmachine.API.Create for "kubernetes-upgrade-436000" (driver="qemu2")
	I0729 10:36:48.266959    4578 client.go:168] LocalClient.Create starting
	I0729 10:36:48.267081    4578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:36:48.267152    4578 main.go:141] libmachine: Decoding PEM data...
	I0729 10:36:48.267170    4578 main.go:141] libmachine: Parsing certificate...
	I0729 10:36:48.267237    4578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:36:48.267282    4578 main.go:141] libmachine: Decoding PEM data...
	I0729 10:36:48.267296    4578 main.go:141] libmachine: Parsing certificate...
	I0729 10:36:48.267895    4578 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:36:48.433498    4578 main.go:141] libmachine: Creating SSH key...
	I0729 10:36:48.491342    4578 main.go:141] libmachine: Creating Disk image...
	I0729 10:36:48.491349    4578 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:36:48.491547    4578 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2
	I0729 10:36:48.500890    4578 main.go:141] libmachine: STDOUT: 
	I0729 10:36:48.500910    4578 main.go:141] libmachine: STDERR: 
	I0729 10:36:48.500982    4578 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2 +20000M
	I0729 10:36:48.508926    4578 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:36:48.508943    4578 main.go:141] libmachine: STDERR: 
	I0729 10:36:48.508957    4578 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2
	I0729 10:36:48.508961    4578 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:36:48.508971    4578 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:36:48.509002    4578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:3c:dc:94:fa:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2
	I0729 10:36:48.510629    4578 main.go:141] libmachine: STDOUT: 
	I0729 10:36:48.510654    4578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:36:48.510665    4578 client.go:171] duration metric: took 243.71175ms to LocalClient.Create
	I0729 10:36:50.512773    4578 start.go:128] duration metric: took 2.300566791s to createHost
	I0729 10:36:50.512847    4578 start.go:83] releasing machines lock for "kubernetes-upgrade-436000", held for 2.30108325s
	W0729 10:36:50.513249    4578 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-436000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-436000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:36:50.523870    4578 out.go:177] 
	W0729 10:36:50.532071    4578 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:36:50.532097    4578 out.go:239] * 
	* 
	W0729 10:36:50.534636    4578 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:36:50.543886    4578 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-436000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-436000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-436000: (2.13394825s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-436000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-436000 status --format={{.Host}}: exit status 7 (56.775708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-436000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-436000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.189175917s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-436000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-436000" primary control-plane node in "kubernetes-upgrade-436000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-436000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-436000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:36:52.780718    4621 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:36:52.780897    4621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:52.780901    4621 out.go:304] Setting ErrFile to fd 2...
	I0729 10:36:52.780904    4621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:52.781034    4621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:36:52.782023    4621 out.go:298] Setting JSON to false
	I0729 10:36:52.798973    4621 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3976,"bootTime":1722270636,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:36:52.799059    4621 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:36:52.803623    4621 out.go:177] * [kubernetes-upgrade-436000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:36:52.811580    4621 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:36:52.811648    4621 notify.go:220] Checking for updates...
	I0729 10:36:52.817596    4621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:36:52.820517    4621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:36:52.823564    4621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:36:52.826484    4621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:36:52.829531    4621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:36:52.832848    4621 config.go:182] Loaded profile config "kubernetes-upgrade-436000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 10:36:52.833108    4621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:36:52.837515    4621 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:36:52.844523    4621 start.go:297] selected driver: qemu2
	I0729 10:36:52.844531    4621 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:36:52.844587    4621 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:36:52.847040    4621 cni.go:84] Creating CNI manager for ""
	I0729 10:36:52.847057    4621 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:36:52.847077    4621 start.go:340] cluster config:
	{Name:kubernetes-upgrade-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:36:52.850780    4621 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:36:52.858511    4621 out.go:177] * Starting "kubernetes-upgrade-436000" primary control-plane node in "kubernetes-upgrade-436000" cluster
	I0729 10:36:52.862405    4621 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 10:36:52.862419    4621 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 10:36:52.862430    4621 cache.go:56] Caching tarball of preloaded images
	I0729 10:36:52.862484    4621 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:36:52.862489    4621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 10:36:52.862534    4621 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/kubernetes-upgrade-436000/config.json ...
	I0729 10:36:52.862952    4621 start.go:360] acquireMachinesLock for kubernetes-upgrade-436000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:36:52.862979    4621 start.go:364] duration metric: took 21.291µs to acquireMachinesLock for "kubernetes-upgrade-436000"
	I0729 10:36:52.862989    4621 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:36:52.862997    4621 fix.go:54] fixHost starting: 
	I0729 10:36:52.863112    4621 fix.go:112] recreateIfNeeded on kubernetes-upgrade-436000: state=Stopped err=<nil>
	W0729 10:36:52.863120    4621 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:36:52.871475    4621 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-436000" ...
	I0729 10:36:52.875461    4621 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:36:52.875496    4621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:3c:dc:94:fa:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2
	I0729 10:36:52.877390    4621 main.go:141] libmachine: STDOUT: 
	I0729 10:36:52.877406    4621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:36:52.877440    4621 fix.go:56] duration metric: took 14.442792ms for fixHost
	I0729 10:36:52.877444    4621 start.go:83] releasing machines lock for "kubernetes-upgrade-436000", held for 14.461375ms
	W0729 10:36:52.877450    4621 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:36:52.877482    4621 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:36:52.877486    4621 start.go:729] Will try again in 5 seconds ...
	I0729 10:36:57.879529    4621 start.go:360] acquireMachinesLock for kubernetes-upgrade-436000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:36:57.880081    4621 start.go:364] duration metric: took 437.417µs to acquireMachinesLock for "kubernetes-upgrade-436000"
	I0729 10:36:57.880253    4621 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:36:57.880274    4621 fix.go:54] fixHost starting: 
	I0729 10:36:57.881019    4621 fix.go:112] recreateIfNeeded on kubernetes-upgrade-436000: state=Stopped err=<nil>
	W0729 10:36:57.881045    4621 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:36:57.891296    4621 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-436000" ...
	I0729 10:36:57.895450    4621 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:36:57.895727    4621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:3c:dc:94:fa:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubernetes-upgrade-436000/disk.qcow2
	I0729 10:36:57.905931    4621 main.go:141] libmachine: STDOUT: 
	I0729 10:36:57.906008    4621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:36:57.906075    4621 fix.go:56] duration metric: took 25.805292ms for fixHost
	I0729 10:36:57.906094    4621 start.go:83] releasing machines lock for "kubernetes-upgrade-436000", held for 25.990209ms
	W0729 10:36:57.906256    4621 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-436000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-436000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:36:57.913372    4621 out.go:177] 
	W0729 10:36:57.917558    4621 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:36:57.917578    4621 out.go:239] * 
	* 
	W0729 10:36:57.919461    4621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:36:57.928478    4621 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-436000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-436000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-436000 version --output=json: exit status 1 (55.0405ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-436000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-29 10:36:57.997094 -0700 PDT m=+2514.834454960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-436000 -n kubernetes-upgrade-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-436000 -n kubernetes-upgrade-436000: exit status 7 (31.574042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-436000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-436000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-436000
--- FAIL: TestKubernetesUpgrade (17.32s)
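
Note on the failure mode: every create and restart attempt in this test dies at the same step, before any upgrade logic runs. socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never handed a network fd. That points at the host's socket_vmnet daemon rather than at minikube or the Kubernetes versions under test. A minimal triage sketch, assuming a Homebrew-managed socket_vmnet install at the paths shown in the log (the brew service name is an assumption, not taken from this report):

	# Does the socket minikube points the client at actually exist?
	ls -l /var/run/socket_vmnet
	# Is the daemon process running at all?
	pgrep -fl socket_vmnet
	# For a Homebrew install, restart it as a root service (assumed setup):
	sudo brew services restart socket_vmnet
	# Re-probe the same way minikube does, with a harmless command:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the last command still prints 'Failed to connect to "/var/run/socket_vmnet": Connection refused', the daemon or its socket path is misconfigured on the agent, and every qemu2/socket_vmnet test in this run will fail the same way.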

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.76s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19345
- KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current524996233/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.76s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.33s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19345
- KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4261776190/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.33s)
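
Note on the two hyperkit failures above: both are environmental rather than regressions. The hyperkit driver exists only for Intel (x86_64) macOS, and this agent is darwin/arm64, so minikube rejects the driver up front with DRV_UNSUPPORTED_OS before the upgrade path is exercised. A quick way to confirm the platform mismatch on the agent (plain shell; the expected outputs are inferred from the hostinfo lines earlier in this report):

	# Kernel and architecture; hyperkit requires Darwin x86_64.
	uname -sm            # prints: Darwin arm64
	# CPU model, confirming Apple silicon on this Jenkins agent.
	sysctl -n machdep.cpu.brand_string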

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (570.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3479015849 start -p stopped-upgrade-396000 --memory=2200 --vm-driver=qemu2 
E0729 10:37:20.403984    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3479015849 start -p stopped-upgrade-396000 --memory=2200 --vm-driver=qemu2 : (37.430919625s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3479015849 -p stopped-upgrade-396000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3479015849 -p stopped-upgrade-396000 stop: (12.109970625s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-396000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0729 10:39:17.327392    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:41:03.521429    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-396000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.704388584s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-396000" primary control-plane node in "stopped-upgrade-396000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-396000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:37:48.639626    4671 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:37:48.639799    4671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:48.639803    4671 out.go:304] Setting ErrFile to fd 2...
	I0729 10:37:48.639806    4671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:48.639965    4671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:37:48.641213    4671 out.go:298] Setting JSON to false
	I0729 10:37:48.661172    4671 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4032,"bootTime":1722270636,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:37:48.661242    4671 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:37:48.665842    4671 out.go:177] * [stopped-upgrade-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:37:48.673797    4671 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:37:48.673849    4671 notify.go:220] Checking for updates...
	I0729 10:37:48.680790    4671 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:37:48.683802    4671 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:37:48.689764    4671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:37:48.693777    4671 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:37:48.696834    4671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:37:48.701015    4671 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:37:48.704790    4671 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 10:37:48.707804    4671 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:37:48.711691    4671 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:37:48.718799    4671 start.go:297] selected driver: qemu2
	I0729 10:37:48.718806    4671 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:37:48.718849    4671 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:37:48.721450    4671 cni.go:84] Creating CNI manager for ""
	I0729 10:37:48.721515    4671 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:37:48.721553    4671 start.go:340] cluster config:
	{Name:stopped-upgrade-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:37:48.721604    4671 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:37:48.729759    4671 out.go:177] * Starting "stopped-upgrade-396000" primary control-plane node in "stopped-upgrade-396000" cluster
	I0729 10:37:48.733688    4671 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 10:37:48.733704    4671 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 10:37:48.733715    4671 cache.go:56] Caching tarball of preloaded images
	I0729 10:37:48.733777    4671 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:37:48.733783    4671 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 10:37:48.733844    4671 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/config.json ...
	I0729 10:37:48.734274    4671 start.go:360] acquireMachinesLock for stopped-upgrade-396000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:37:48.734300    4671 start.go:364] duration metric: took 20.833µs to acquireMachinesLock for "stopped-upgrade-396000"
	I0729 10:37:48.734309    4671 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:37:48.734313    4671 fix.go:54] fixHost starting: 
	I0729 10:37:48.734417    4671 fix.go:112] recreateIfNeeded on stopped-upgrade-396000: state=Stopped err=<nil>
	W0729 10:37:48.734425    4671 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:37:48.742810    4671 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-396000" ...
	I0729 10:37:48.746776    4671 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:37:48.746836    4671 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50491-:22,hostfwd=tcp::50492-:2376,hostname=stopped-upgrade-396000 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/disk.qcow2
	I0729 10:37:48.794668    4671 main.go:141] libmachine: STDOUT: 
	I0729 10:37:48.794694    4671 main.go:141] libmachine: STDERR: 
	I0729 10:37:48.794700    4671 main.go:141] libmachine: Waiting for VM to start (ssh -p 50491 docker@127.0.0.1)...
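
The libmachine line above shows the full qemu-system-aarch64 invocation: hvf acceleration, a virtio user-mode NIC with the guest's SSH (22) and Docker (2376) ports forwarded to host ports 50491/50492, and a daemonized qcow2 disk. A sketch of assembling that argv from Go with os/exec, with the firmware/QMP/pidfile arguments omitted for brevity and the machine directory as a placeholder:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        machineDir := "/path/to/.minikube/machines/stopped-upgrade-396000" // assumption
        cmd := exec.Command("qemu-system-aarch64",
            "-M", "virt,highmem=off",
            "-cpu", "host",
            "-accel", "hvf", // macOS Hypervisor.framework acceleration
            "-m", "2200", "-smp", "2",
            "-boot", "d",
            "-cdrom", machineDir+"/boot2docker.iso",
            // user-mode NIC; forward SSH and Docker to fixed host ports
            "-nic", "user,model=virtio,hostfwd=tcp::50491-:22,hostfwd=tcp::50492-:2376",
            "-daemonize", machineDir+"/disk.qcow2",
        )
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("qemu failed: %v\n%s", err, out)
        }
        log.Printf("qemu started: %s", out)
    }
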
	I0729 10:38:08.299537    4671 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/config.json ...
	I0729 10:38:08.300200    4671 machine.go:94] provisionDockerMachine start ...
	I0729 10:38:08.300378    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:08.300869    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:08.300883    4671 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 10:38:08.378824    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 10:38:08.378868    4671 buildroot.go:166] provisioning hostname "stopped-upgrade-396000"
	I0729 10:38:08.378986    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:08.379280    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:08.379295    4671 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-396000 && echo "stopped-upgrade-396000" | sudo tee /etc/hostname
	I0729 10:38:08.448739    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-396000
	
	I0729 10:38:08.448852    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:08.449036    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:08.449048    4671 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-396000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-396000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-396000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:38:08.504067    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:38:08.504080    4671 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19345-1151/.minikube CaCertPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19345-1151/.minikube}
	I0729 10:38:08.504090    4671 buildroot.go:174] setting up certificates
	I0729 10:38:08.504094    4671 provision.go:84] configureAuth start
	I0729 10:38:08.504103    4671 provision.go:143] copyHostCerts
	I0729 10:38:08.504175    4671 exec_runner.go:144] found /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.pem, removing ...
	I0729 10:38:08.504184    4671 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.pem
	I0729 10:38:08.504302    4671 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.pem (1082 bytes)
	I0729 10:38:08.504492    4671 exec_runner.go:144] found /Users/jenkins/minikube-integration/19345-1151/.minikube/cert.pem, removing ...
	I0729 10:38:08.504497    4671 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19345-1151/.minikube/cert.pem
	I0729 10:38:08.504552    4671 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19345-1151/.minikube/cert.pem (1123 bytes)
	I0729 10:38:08.504665    4671 exec_runner.go:144] found /Users/jenkins/minikube-integration/19345-1151/.minikube/key.pem, removing ...
	I0729 10:38:08.504669    4671 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19345-1151/.minikube/key.pem
	I0729 10:38:08.504731    4671 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19345-1151/.minikube/key.pem (1675 bytes)
	I0729 10:38:08.504831    4671 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-396000 san=[127.0.0.1 localhost minikube stopped-upgrade-396000]
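
The provision step above mints a server certificate signed by the minikube CA with the SANs listed in the log (127.0.0.1, localhost, minikube, stopped-upgrade-396000). A self-contained sketch of that signing flow with crypto/x509; a throwaway CA is generated here so the example runs standalone, whereas the real flow loads ca.pem/ca-key.pem from the cert directory:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for ca.pem / ca-key.pem (assumption).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-396000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            // SANs from the log: san=[127.0.0.1 localhost minikube stopped-upgrade-396000]
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-396000"},
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
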
	I0729 10:38:08.608985    4671 provision.go:177] copyRemoteCerts
	I0729 10:38:08.609015    4671 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:38:08.609023    4671 sshutil.go:53] new ssh client: &{IP:localhost Port:50491 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/id_rsa Username:docker}
	I0729 10:38:08.638485    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 10:38:08.645321    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:38:08.651691    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 10:38:08.659435    4671 provision.go:87] duration metric: took 155.341ms to configureAuth
	I0729 10:38:08.659449    4671 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:38:08.659587    4671 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:38:08.659626    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:08.659724    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:08.659732    4671 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 10:38:08.712531    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 10:38:08.712544    4671 buildroot.go:70] root file system type: tmpfs
	I0729 10:38:08.712597    4671 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 10:38:08.712649    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:08.712769    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:08.712803    4671 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 10:38:08.766942    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 10:38:08.766988    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:08.767097    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:08.767111    4671 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 10:38:09.126686    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 10:38:09.126699    4671 machine.go:97] duration metric: took 826.528875ms to provisionDockerMachine
	I0729 10:38:09.126706    4671 start.go:293] postStartSetup for "stopped-upgrade-396000" (driver="qemu2")
	I0729 10:38:09.126713    4671 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:38:09.126782    4671 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:38:09.126793    4671 sshutil.go:53] new ssh client: &{IP:localhost Port:50491 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/id_rsa Username:docker}
	I0729 10:38:09.159039    4671 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:38:09.160391    4671 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 10:38:09.160399    4671 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19345-1151/.minikube/addons for local assets ...
	I0729 10:38:09.160477    4671 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19345-1151/.minikube/files for local assets ...
	I0729 10:38:09.160571    4671 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19345-1151/.minikube/files/etc/ssl/certs/16482.pem -> 16482.pem in /etc/ssl/certs
	I0729 10:38:09.160663    4671 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:38:09.163506    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/files/etc/ssl/certs/16482.pem --> /etc/ssl/certs/16482.pem (1708 bytes)
	I0729 10:38:09.170528    4671 start.go:296] duration metric: took 43.818291ms for postStartSetup
	I0729 10:38:09.170541    4671 fix.go:56] duration metric: took 20.437199541s for fixHost
	I0729 10:38:09.170574    4671 main.go:141] libmachine: Using SSH client type: native
	I0729 10:38:09.170705    4671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103162a10] 0x103165270 <nil>  [] 0s} localhost 50491 <nil> <nil>}
	I0729 10:38:09.170709    4671 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 10:38:09.220534    4671 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722274689.681266712
	
	I0729 10:38:09.220540    4671 fix.go:216] guest clock: 1722274689.681266712
	I0729 10:38:09.220544    4671 fix.go:229] Guest: 2024-07-29 10:38:09.681266712 -0700 PDT Remote: 2024-07-29 10:38:09.170543 -0700 PDT m=+20.564879251 (delta=510.723712ms)
	I0729 10:38:09.220554    4671 fix.go:200] guest clock delta is within tolerance: 510.723712ms
	I0729 10:38:09.220556    4671 start.go:83] releasing machines lock for "stopped-upgrade-396000", held for 20.487226792s
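
The fix.go lines above measure guest clock skew by running `date +%s.%N` in the VM and diffing it against host wall-clock time; here the delta is 510.723712ms and passes tolerance. A sketch of that computation on the sample output from the log; the 2-second threshold is an assumption, since the log does not print the actual tolerance value:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1722274689.681266712" // sample `date +%s.%N` output from the log
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        host := time.Now()
        delta := guest.Sub(host)
        fmt.Printf("guest clock delta: %v\n", delta)
        // Assumption: small skew is tolerated; larger skew would trigger a resync.
        if math.Abs(delta.Seconds()) > 2 {
            fmt.Println("delta outside tolerance, would resync guest clock")
        }
    }
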
	I0729 10:38:09.220608    4671 ssh_runner.go:195] Run: cat /version.json
	I0729 10:38:09.220610    4671 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:38:09.220618    4671 sshutil.go:53] new ssh client: &{IP:localhost Port:50491 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/id_rsa Username:docker}
	I0729 10:38:09.220630    4671 sshutil.go:53] new ssh client: &{IP:localhost Port:50491 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/id_rsa Username:docker}
	W0729 10:38:09.351098    4671 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 10:38:09.351166    4671 ssh_runner.go:195] Run: systemctl --version
	I0729 10:38:09.353385    4671 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:38:09.355262    4671 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:38:09.355297    4671 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 10:38:09.358720    4671 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 10:38:09.366772    4671 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
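
The two find/sed commands above rewrite any bridge/podman CNI configs so their subnet and gateway match the 10.244.0.0/16 pod CIDR that kubeadm is configured with later in this log. A sketch of the same rewrite done structurally on the conflist JSON instead of with regexes; the sample conflist is illustrative, not the file from this run:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conflist := []byte(`{"cniVersion":"0.4.0","name":"podman","plugins":[
          {"type":"bridge","ipam":{"ranges":[[{"subnet":"10.88.0.0/16","gateway":"10.88.0.1"}]]}}]}`)

        var cfg map[string]any
        if err := json.Unmarshal(conflist, &cfg); err != nil {
            panic(err)
        }
        for _, p := range cfg["plugins"].([]any) {
            plugin := p.(map[string]any)
            if ipam, ok := plugin["ipam"].(map[string]any); ok {
                for _, rangeSet := range ipam["ranges"].([]any) {
                    for _, r := range rangeSet.([]any) {
                        entry := r.(map[string]any)
                        entry["subnet"] = "10.244.0.0/16" // pod CIDR used by kubeadm below
                        entry["gateway"] = "10.244.0.1"
                    }
                }
            }
        }
        out, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(out))
    }
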
	I0729 10:38:09.366788    4671 start.go:495] detecting cgroup driver to use...
	I0729 10:38:09.366865    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:38:09.374110    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 10:38:09.378026    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 10:38:09.384437    4671 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 10:38:09.384495    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 10:38:09.387899    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:38:09.391131    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 10:38:09.394243    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:38:09.397287    4671 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:38:09.400102    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 10:38:09.403510    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 10:38:09.406818    4671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 10:38:09.409644    4671 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:38:09.412191    4671 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:38:09.415027    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:38:09.496781    4671 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 10:38:09.507124    4671 start.go:495] detecting cgroup driver to use...
	I0729 10:38:09.507196    4671 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 10:38:09.512771    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:38:09.520040    4671 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:38:09.535922    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:38:09.541035    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 10:38:09.545612    4671 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 10:38:09.573983    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 10:38:09.578431    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:38:09.583599    4671 ssh_runner.go:195] Run: which cri-dockerd
	I0729 10:38:09.585024    4671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 10:38:09.587842    4671 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 10:38:09.592916    4671 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 10:38:09.682117    4671 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 10:38:09.757540    4671 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 10:38:09.757622    4671 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 10:38:09.762823    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:38:09.843360    4671 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 10:38:11.021717    4671 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.178395s)
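
The docker.go step above writes a 130-byte daemon.json pinning docker's cgroup driver to cgroupfs before the restart. The log only reports the file's size, so the contents below are an assumption about what such a file typically holds:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Hypothetical daemon.json contents; only the cgroup driver setting
        // is implied by the "configuring docker to use cgroupfs" log line.
        cfg := map[string]any{
            "exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
            "log-driver": "json-file",
        }
        out, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(out))
    }
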
	I0729 10:38:11.021773    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 10:38:11.026456    4671 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 10:38:11.033187    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 10:38:11.038300    4671 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 10:38:11.124290    4671 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 10:38:11.208424    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:38:11.292415    4671 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 10:38:11.299014    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 10:38:11.304603    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:38:11.389340    4671 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 10:38:11.428773    4671 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 10:38:11.428852    4671 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
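
"Will wait 60s for socket path" above is a poll loop on the cri-dockerd unix socket. A sketch of that wait, assuming a plain stat poll every 500ms:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists as a unix socket or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
        fmt.Println("socket ready")
    }
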
	I0729 10:38:11.430947    4671 start.go:563] Will wait 60s for crictl version
	I0729 10:38:11.430999    4671 ssh_runner.go:195] Run: which crictl
	I0729 10:38:11.432420    4671 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:38:11.446566    4671 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 10:38:11.446636    4671 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 10:38:11.462936    4671 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 10:38:11.487380    4671 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 10:38:11.487498    4671 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 10:38:11.488845    4671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
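
The bash one-liner above makes the hosts entry idempotent: drop any line already ending in the name, append a fresh ip<tab>name pair, and copy the temp file back with sudo. The same upsert expressed directly over the file contents; the helper name is hypothetical:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost removes any stale line ending in "\t"+name, then appends ip\tname.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale entry for this name
            }
            kept = append(kept, line)
        }
        return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
        const hosts = "127.0.0.1\tlocalhost"
        fmt.Print(upsertHost(hosts, "10.0.2.2", "host.minikube.internal"))
    }
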
	I0729 10:38:11.492323    4671 kubeadm.go:883] updating cluster {Name:stopped-upgrade-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 10:38:11.492367    4671 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 10:38:11.492413    4671 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 10:38:11.503214    4671 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 10:38:11.503230    4671 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 10:38:11.503275    4671 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 10:38:11.506601    4671 ssh_runner.go:195] Run: which lz4
	I0729 10:38:11.507959    4671 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 10:38:11.509368    4671 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 10:38:11.509380    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 10:38:12.416255    4671 docker.go:649] duration metric: took 908.368834ms to copy over tarball
	I0729 10:38:12.416312    4671 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 10:38:13.568858    4671 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.152587667s)
	I0729 10:38:13.568872    4671 ssh_runner.go:146] rm: /preloaded.tar.lz4
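
The preload path above runs: check for /preloaded.tar.lz4 in the guest, scp the cached tarball over when missing, unpack it over /var with xattrs preserved, then delete it. A sketch of those steps driven from the host; the port is taken from the log, but the key path and file locations are placeholders:

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes a command and aborts with its combined output on failure.
    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            log.Fatalf("%s %v: %v\n%s", name, args, err, out)
        }
    }

    func main() {
        // Copy the cached tarball into the VM (key path is an assumption).
        run("scp", "-P", "50491", "-i", "id_rsa",
            "preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4",
            "docker@127.0.0.1:/preloaded.tar.lz4")
        // Unpack with xattrs preserved, exactly as the ssh_runner log shows,
        // then remove the tarball.
        run("ssh", "-p", "50491", "-i", "id_rsa", "docker@127.0.0.1",
            "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4")
    }
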
	I0729 10:38:13.584484    4671 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 10:38:13.587662    4671 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 10:38:13.592833    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:38:13.675305    4671 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 10:38:15.172156    4671 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.496905125s)
	I0729 10:38:15.172255    4671 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 10:38:15.184940    4671 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 10:38:15.184956    4671 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 10:38:15.184962    4671 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 10:38:15.189387    4671 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:38:15.191047    4671 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:38:15.193038    4671 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:38:15.193058    4671 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:38:15.194101    4671 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:38:15.194958    4671 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:38:15.196373    4671 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:38:15.196479    4671 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:38:15.197118    4671 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:38:15.197957    4671 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:38:15.198439    4671 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 10:38:15.199192    4671 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:38:15.199838    4671 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:38:15.199997    4671 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:38:15.200763    4671 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 10:38:15.201310    4671 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:38:15.571457    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:38:15.583917    4671 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 10:38:15.583949    4671 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:38:15.584001    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:38:15.594370    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 10:38:15.610244    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:38:15.620391    4671 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 10:38:15.620414    4671 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:38:15.620464    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:38:15.622745    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:38:15.632208    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 10:38:15.639611    4671 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 10:38:15.639627    4671 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:38:15.639678    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:38:15.641579    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:38:15.652844    4671 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 10:38:15.652865    4671 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:38:15.652918    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:38:15.653032    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 10:38:15.657790    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 10:38:15.663896    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 10:38:15.672321    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 10:38:15.673595    4671 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 10:38:15.673612    4671 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 10:38:15.673645    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 10:38:15.684650    4671 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 10:38:15.684664    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 10:38:15.684668    4671 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:38:15.684708    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 10:38:15.684773    4671 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 10:38:15.687284    4671 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 10:38:15.687303    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 10:38:15.696568    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 10:38:15.696683    4671 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 10:38:15.698382    4671 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0729 10:38:15.698403    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0729 10:38:15.707147    4671 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 10:38:15.707158    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0729 10:38:15.721579    4671 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 10:38:15.721712    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:38:15.773009    4671 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 10:38:15.773069    4671 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 10:38:15.773091    4671 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:38:15.773160    4671 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:38:15.807243    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 10:38:15.807384    4671 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 10:38:15.820760    4671 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 10:38:15.820793    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 10:38:15.907673    4671 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 10:38:15.907689    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 10:38:15.981613    4671 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 10:38:16.019693    4671 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 10:38:16.019708    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0729 10:38:16.035310    4671 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 10:38:16.035427    4671 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:38:16.156972    4671 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 10:38:16.157016    4671 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 10:38:16.157036    4671 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:38:16.157091    4671 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:38:16.173408    4671 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 10:38:16.173541    4671 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 10:38:16.174984    4671 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 10:38:16.174999    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 10:38:16.205172    4671 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 10:38:16.205188    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 10:38:16.434269    4671 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 10:38:16.434307    4671 cache_images.go:92] duration metric: took 1.249390875s to LoadCachedImages
	W0729 10:38:16.434354    4671 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
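
Each "Loading image" step above streams a cached image tarball into the docker daemon; the final warning is about the kube-apiserver image file missing from the host cache, not about the load mechanism itself. A sketch of one load step, assuming a local tarball piped into `docker load` (the real flow pipes via `sudo cat ... | docker load` over SSH):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        f, err := os.Open("/var/lib/minikube/images/pause_3.7")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        cmd := exec.Command("docker", "load")
        cmd.Stdin = f // stream the image tarball into the daemon
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("docker load: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }
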
	I0729 10:38:16.434361    4671 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 10:38:16.434419    4671 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-396000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:38:16.434481    4671 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 10:38:16.453691    4671 cni.go:84] Creating CNI manager for ""
	I0729 10:38:16.453702    4671 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:38:16.453707    4671 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:38:16.453715    4671 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-396000 NodeName:stopped-upgrade-396000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:38:16.453780    4671 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-396000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 10:38:16.453841    4671 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 10:38:16.456624    4671 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:38:16.456658    4671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 10:38:16.459496    4671 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 10:38:16.464372    4671 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:38:16.469170    4671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 10:38:16.474471    4671 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 10:38:16.475654    4671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:38:16.479521    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:38:16.558319    4671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:38:16.563461    4671 certs.go:68] Setting up /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000 for IP: 10.0.2.15
	I0729 10:38:16.563470    4671 certs.go:194] generating shared ca certs ...
	I0729 10:38:16.563479    4671 certs.go:226] acquiring lock for ca certs: {Name:mk28bd7d778d1316d2729251af42b84d93001f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:38:16.563645    4671 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.key
	I0729 10:38:16.563689    4671 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/proxy-client-ca.key
	I0729 10:38:16.563699    4671 certs.go:256] generating profile certs ...
	I0729 10:38:16.563762    4671 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/client.key
	I0729 10:38:16.563777    4671 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.key.3a53ef7d
	I0729 10:38:16.563786    4671 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.crt.3a53ef7d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 10:38:16.697532    4671 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.crt.3a53ef7d ...
	I0729 10:38:16.697547    4671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.crt.3a53ef7d: {Name:mkf8c8827c3bf4e8c67713a9eecd11bc6940bf81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:38:16.699374    4671 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.key.3a53ef7d ...
	I0729 10:38:16.699381    4671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.key.3a53ef7d: {Name:mk081eaa9df64d2852d9436fbb1765eef30ee189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:38:16.699537    4671 certs.go:381] copying /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.crt.3a53ef7d -> /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.crt
	I0729 10:38:16.699874    4671 certs.go:385] copying /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.key.3a53ef7d -> /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.key
	I0729 10:38:16.700029    4671 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/proxy-client.key
	I0729 10:38:16.700168    4671 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/1648.pem (1338 bytes)
	W0729 10:38:16.700190    4671 certs.go:480] ignoring /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/1648_empty.pem, impossibly tiny 0 bytes
	I0729 10:38:16.700196    4671 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:38:16.700214    4671 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:38:16.700232    4671 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:38:16.700250    4671 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/key.pem (1675 bytes)
	I0729 10:38:16.700287    4671 certs.go:484] found cert: /Users/jenkins/minikube-integration/19345-1151/.minikube/files/etc/ssl/certs/16482.pem (1708 bytes)
	I0729 10:38:16.700644    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:38:16.707890    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 10:38:16.714920    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:38:16.722249    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:38:16.729094    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 10:38:16.735626    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 10:38:16.742868    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:38:16.750138    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:38:16.757103    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:38:16.763574    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/1648.pem --> /usr/share/ca-certificates/1648.pem (1338 bytes)
	I0729 10:38:16.770899    4671 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19345-1151/.minikube/files/etc/ssl/certs/16482.pem --> /usr/share/ca-certificates/16482.pem (1708 bytes)
	I0729 10:38:16.777883    4671 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:38:16.783169    4671 ssh_runner.go:195] Run: openssl version
	I0729 10:38:16.785203    4671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16482.pem && ln -fs /usr/share/ca-certificates/16482.pem /etc/ssl/certs/16482.pem"
	I0729 10:38:16.788320    4671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16482.pem
	I0729 10:38:16.789884    4671 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:03 /usr/share/ca-certificates/16482.pem
	I0729 10:38:16.789908    4671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16482.pem
	I0729 10:38:16.791732    4671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16482.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 10:38:16.794987    4671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:38:16.798266    4671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:38:16.799735    4671 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:38:16.799760    4671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:38:16.801622    4671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:38:16.804520    4671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1648.pem && ln -fs /usr/share/ca-certificates/1648.pem /etc/ssl/certs/1648.pem"
	I0729 10:38:16.807670    4671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1648.pem
	I0729 10:38:16.809167    4671 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:03 /usr/share/ca-certificates/1648.pem
	I0729 10:38:16.809184    4671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1648.pem
	I0729 10:38:16.810906    4671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1648.pem /etc/ssl/certs/51391683.0"
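
The openssl/ln pairs above populate /etc/ssl/certs with subject-hash symlinks: openssl x509 -hash -noout prints the subject hash (b5213941 for minikubeCA.pem, for example) and the certificate is then linked as <hash>.0, the name TLS libraries look up. A sketch of the same pattern, with illustrative paths:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // hashedSymlink computes the OpenSSL subject hash of a CA file and
    // links <hash>.0 to it, mirroring the test -L || ln -fs commands in
    // the log.
    func hashedSymlink(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // replace a stale link, like ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := hashedSymlink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
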
	I0729 10:38:16.813820    4671 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:38:16.815238    4671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 10:38:16.818126    4671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 10:38:16.819949    4671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 10:38:16.822336    4671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 10:38:16.824148    4671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 10:38:16.825934    4671 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
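
The -checkend 86400 probes confirm that each control-plane certificate stays valid for at least the next 24 hours (86,400 seconds); a non-zero exit would trigger regeneration. The same check done natively with crypto/x509, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within d, the question `openssl x509 -checkend 86400`
    // answers for a 24-hour window.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
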
	I0729 10:38:16.827743    4671 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:38:16.827811    4671 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 10:38:16.837453    4671 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 10:38:16.840544    4671 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 10:38:16.840550    4671 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 10:38:16.840570    4671 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 10:38:16.843813    4671 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:38:16.844102    4671 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-396000" does not appear in /Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:38:16.844208    4671 kubeconfig.go:62] /Users/jenkins/minikube-integration/19345-1151/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-396000" cluster setting kubeconfig missing "stopped-upgrade-396000" context setting]
	I0729 10:38:16.844410    4671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/kubeconfig: {Name:mk69e1ff39ac907f2664a3f00c50d678e5bdc356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:38:16.844825    4671 kapi.go:59] client config for stopped-upgrade-396000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/client.key", CAFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1044f80c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
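
The client config printed at kapi.go:59 corresponds to a client-go rest.Config wired to the profile's client certificate, key, and CA (rest.sanitizedTLSClientConfig is just the internal form client-go prints). A hand-built equivalent using client-go's exported types, as a sketch:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Paths are taken from the log; this sketch assumes client-go is
        // available (go get k8s.io/client-go).
        base := "/Users/jenkins/minikube-integration/19345-1151/.minikube"
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: base + "/profiles/stopped-upgrade-396000/client.crt",
                KeyFile:  base + "/profiles/stopped-upgrade-396000/client.key",
                CAFile:   base + "/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("client ready: %T\n", clientset)
    }
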
	I0729 10:38:16.845156    4671 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 10:38:16.847823    4671 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-396000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
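
kubeadm.go:640 treats any non-empty diff -u between the deployed kubeadm.yaml and the freshly rendered one as drift; here the criSocket gained its unix:// scheme and the cgroup driver moved from systemd to cgroupfs, so the cluster must be reconfigured. A sketch of that exit-code-based check:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrifted runs `diff -u old new`: exit 0 means identical,
    // exit 1 means the files differ (drift), anything else is an error
    // from diff itself.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil
        }
        return false, "", err
    }

    func main() {
        drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println(err)
            return
        }
        if drifted {
            fmt.Print("kubeadm config drift detected:\n" + diff)
        }
    }
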
	I0729 10:38:16.847829    4671 kubeadm.go:1160] stopping kube-system containers ...
	I0729 10:38:16.847870    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 10:38:16.858120    4671 docker.go:483] Stopping containers: [3a751030b0c1 06ee739538c0 911773d2a582 d85e93f01c88 8932880f9d0a 52cd23a2afc6 ca2e80c87719 9754f11c265c]
	I0729 10:38:16.858187    4671 ssh_runner.go:195] Run: docker stop 3a751030b0c1 06ee739538c0 911773d2a582 d85e93f01c88 8932880f9d0a 52cd23a2afc6 ca2e80c87719 9754f11c265c
	I0729 10:38:16.868520    4671 ssh_runner.go:195] Run: sudo systemctl stop kubelet
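
Before reconfiguring, every kube-system container is stopped in a single docker stop invocation and then the kubelet itself is shut down, so nothing restarts pods from the configuration about to be replaced. The same sequence as a sketch against a local docker CLI:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List kube-system pod containers by name pattern, as the log does.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        if ids := strings.Fields(string(out)); len(ids) > 0 {
            if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
                fmt.Println("docker stop:", err)
            }
        }
        if err := exec.Command("sudo", "systemctl", "stop", "kubelet").Run(); err != nil {
            fmt.Println("systemctl:", err)
        }
    }
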
	I0729 10:38:16.874350    4671 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:38:16.876940    4671 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:38:16.876950    4671 kubeadm.go:157] found existing configuration files:
	
	I0729 10:38:16.876970    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf
	I0729 10:38:16.879611    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:38:16.879628    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:38:16.882614    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf
	I0729 10:38:16.885214    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:38:16.885233    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:38:16.887790    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf
	I0729 10:38:16.890516    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:38:16.890538    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:38:16.893097    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf
	I0729 10:38:16.895596    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:38:16.895617    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
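
Each grep above asks whether an existing kubeconfig-style file already targets https://control-plane.minikube.internal:50526; since none of the four files exist, every grep exits with status 2 and the file is removed with rm -f so kubeadm can regenerate it. The grep-then-remove pattern as a sketch:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeUnlessEndpoint keeps a config file only if it already points
    // at the expected control-plane endpoint; otherwise it is removed
    // (RemoveAll ignores missing files, like rm -f).
    func removeUnlessEndpoint(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err == nil && strings.Contains(string(data), endpoint) {
            return nil
        }
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        return os.RemoveAll(path)
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:50526"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            if err := removeUnlessEndpoint("/etc/kubernetes/"+f, endpoint); err != nil {
                fmt.Println(err)
            }
        }
    }
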
	I0729 10:38:16.898656    4671 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:38:16.901554    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:38:16.923344    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:38:17.209238    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:38:17.343905    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:38:17.365835    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
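
Rather than a full kubeadm init, the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd. A sketch of that sequence (it only makes sense inside the guest, so treat the paths as given by the log rather than portable):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{kubeadm, "init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", phase, err, out)
                return
            }
        }
    }
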
	I0729 10:38:17.390884    4671 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:38:17.390964    4671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:38:17.893059    4671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:38:18.393018    4671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:38:18.397176    4671 api_server.go:72] duration metric: took 1.006342083s to wait for apiserver process to appear ...
	I0729 10:38:18.397186    4671 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:38:18.397194    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:23.399212    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:23.399289    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:28.399764    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:28.399844    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:33.400503    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:33.400521    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:38.401034    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:38.401127    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:43.402170    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:43.402230    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:48.402877    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:48.402947    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:53.404717    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:53.404790    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:38:58.406418    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:38:58.406471    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:03.408662    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:03.408734    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:08.410948    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:08.410997    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:13.412558    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:13.412636    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:18.414873    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
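
Each healthz probe above runs with roughly a five-second client timeout, and after about a minute of consecutive failures the loop falls back to gathering diagnostics below. A sketch of such a polling loop; the InsecureSkipVerify transport is an assumption standing in for the real cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy; collect logs instead")
    }
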
	I0729 10:39:18.415066    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:18.431809    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:39:18.431936    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:18.444378    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:39:18.444443    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:18.455559    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:39:18.455644    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:18.466476    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:39:18.466558    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:18.482928    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:39:18.482994    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:18.493234    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:39:18.493293    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:18.503615    4671 logs.go:276] 0 containers: []
	W0729 10:39:18.503626    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:18.503681    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:18.518289    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:39:18.518318    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:18.518327    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:18.627725    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:39:18.627738    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:39:18.641975    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:39:18.641988    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:39:18.653897    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:39:18.653910    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:39:18.665613    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:39:18.665625    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:39:18.681424    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:39:18.681435    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:39:18.692992    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:39:18.693008    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:18.707043    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:39:18.707054    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:39:18.719171    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:18.719183    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:18.723419    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:39:18.723425    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:39:18.738571    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:39:18.738581    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:39:18.750333    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:18.750343    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:18.775956    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:18.775965    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:18.815190    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:39:18.815200    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:39:18.828652    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:39:18.828665    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:39:18.856404    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:39:18.856415    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:39:18.871745    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:39:18.871755    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
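
Each "Gathering logs for ..." step first resolves container IDs with a docker ps name filter (k8s_<component>), then tails the last 400 lines of each matching container. The lookup-then-tail pattern as a sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists containers whose names match k8s_<component>,
    // mirroring the docker ps --filter calls in the log.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, id := range ids {
            logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("== %s ==\n%s", id, logs)
        }
    }
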
	I0729 10:39:21.390924    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:26.393052    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:26.393161    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:26.404649    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:39:26.404723    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:26.416082    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:39:26.416151    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:26.427816    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:39:26.427897    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:26.439181    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:39:26.439320    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:26.450454    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:39:26.450515    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:26.468114    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:39:26.468181    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:26.479173    4671 logs.go:276] 0 containers: []
	W0729 10:39:26.479183    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:26.479243    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:26.494085    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:39:26.494100    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:26.494105    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:26.533340    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:39:26.533357    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:39:26.549020    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:39:26.549042    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:26.562103    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:39:26.562116    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:39:26.576738    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:39:26.576754    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:39:26.588898    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:39:26.588911    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:39:26.601525    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:26.601537    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:26.627920    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:26.627934    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:26.667745    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:39:26.667761    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:39:26.693918    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:39:26.693936    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:39:26.709764    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:39:26.709774    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:39:26.728994    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:39:26.729005    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:39:26.747029    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:26.747041    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:26.751836    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:39:26.751842    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:39:26.764433    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:39:26.764444    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:39:26.776822    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:39:26.776834    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:39:26.794028    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:39:26.794050    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:39:29.308607    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:34.310714    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:34.310822    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:34.321898    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:39:34.321981    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:34.337255    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:39:34.337327    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:34.348219    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:39:34.348287    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:34.358621    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:39:34.358702    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:34.369191    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:39:34.369263    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:34.379826    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:39:34.379896    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:34.390105    4671 logs.go:276] 0 containers: []
	W0729 10:39:34.390115    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:34.390176    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:34.400819    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:39:34.400836    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:34.400842    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:34.438963    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:39:34.438977    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:39:34.463761    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:39:34.463772    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:39:34.481445    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:34.481454    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:34.485325    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:39:34.485332    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:39:34.500750    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:39:34.500760    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:39:34.514840    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:39:34.514850    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:39:34.526615    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:39:34.526625    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:39:34.540889    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:34.540898    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:34.579358    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:39:34.579365    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:39:34.590135    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:39:34.590149    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:39:34.604992    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:39:34.605002    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:39:34.618695    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:34.618706    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:34.644381    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:39:34.644388    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:39:34.658381    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:39:34.658390    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:39:34.669649    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:39:34.669660    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:39:34.687519    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:39:34.687533    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:37.201210    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:42.203270    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:42.203356    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:42.214759    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:39:42.214830    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:42.227436    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:39:42.227509    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:42.238353    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:39:42.238426    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:42.250243    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:39:42.250312    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:42.261160    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:39:42.261241    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:42.273785    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:39:42.273859    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:42.284740    4671 logs.go:276] 0 containers: []
	W0729 10:39:42.284754    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:42.284817    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:42.295948    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:39:42.295966    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:42.295972    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:42.300254    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:39:42.300261    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:39:42.315858    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:39:42.315868    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:39:42.332783    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:39:42.332794    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:39:42.351675    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:39:42.351688    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:39:42.368392    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:39:42.368406    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:39:42.379493    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:42.379503    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:42.406208    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:39:42.406237    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:42.419377    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:39:42.419390    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:39:42.433305    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:39:42.433316    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:39:42.459053    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:39:42.459064    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:39:42.470973    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:39:42.470984    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:39:42.485522    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:39:42.485537    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:39:42.503562    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:39:42.503575    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:39:42.514804    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:42.514816    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:42.554378    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:42.554391    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:42.590604    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:39:42.590619    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:39:45.104963    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:50.105820    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:50.105983    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:50.121357    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:39:50.121431    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:50.143668    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:39:50.143736    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:50.156961    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:39:50.157024    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:50.168156    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:39:50.168229    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:50.178689    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:39:50.178754    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:50.189175    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:39:50.189244    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:50.198739    4671 logs.go:276] 0 containers: []
	W0729 10:39:50.198750    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:50.198803    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:50.209624    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:39:50.209643    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:50.209649    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:50.214232    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:39:50.214238    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:39:50.234124    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:39:50.234134    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:39:50.263084    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:39:50.263094    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:39:50.277006    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:39:50.277020    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:39:50.288822    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:39:50.288832    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:39:50.302409    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:39:50.302419    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:50.314136    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:39:50.314151    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:39:50.331180    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:39:50.331193    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:39:50.342235    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:50.342245    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:50.381897    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:39:50.381910    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:39:50.398976    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:39:50.398988    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:39:50.416871    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:39:50.416883    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:39:50.428584    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:50.428598    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:50.453223    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:50.453235    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:50.489321    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:39:50.489332    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:39:50.501256    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:39:50.501267    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:39:53.014232    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:39:58.016270    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:39:58.016434    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:39:58.031439    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:39:58.031527    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:39:58.043128    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:39:58.043194    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:39:58.053377    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:39:58.053445    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:39:58.068139    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:39:58.068208    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:39:58.080668    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:39:58.080734    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:39:58.095717    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:39:58.095775    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:39:58.106266    4671 logs.go:276] 0 containers: []
	W0729 10:39:58.106279    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:39:58.106339    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:39:58.116440    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:39:58.116461    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:39:58.116467    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:39:58.141411    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:39:58.141421    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:39:58.155725    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:39:58.155736    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:39:58.167808    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:39:58.167818    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:39:58.205245    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:39:58.205256    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:39:58.240554    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:39:58.240568    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:39:58.252389    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:39:58.252401    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:39:58.270581    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:39:58.270592    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:39:58.295482    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:39:58.295489    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:39:58.299611    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:39:58.299619    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:39:58.313815    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:39:58.313827    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:39:58.325167    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:39:58.325178    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:39:58.337053    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:39:58.337063    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:39:58.350840    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:39:58.350854    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:39:58.367086    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:39:58.367096    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:39:58.384382    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:39:58.384403    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:39:58.396522    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:39:58.396532    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:00.921588    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:05.923351    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:05.923542    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:05.943635    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:05.943736    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:05.958287    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:05.958366    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:05.970626    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:05.970695    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:05.981244    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:05.981312    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:05.993073    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:05.993147    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:06.006276    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:06.006363    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:06.017674    4671 logs.go:276] 0 containers: []
	W0729 10:40:06.017689    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:06.017755    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:06.028711    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:40:06.028729    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:06.028734    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:06.068621    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:06.068631    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:06.083143    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:06.083160    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:06.095539    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:06.095554    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:06.107608    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:06.107620    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:06.122207    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:06.122217    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:06.133624    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:06.133635    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:06.152392    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:06.152404    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:06.166683    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:06.166693    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:40:06.177883    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:06.177893    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:06.203525    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:06.203534    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:06.207671    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:06.207676    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:06.241874    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:06.241886    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:06.267260    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:06.267271    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:06.281816    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:06.281825    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:06.293913    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:06.293926    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:06.308994    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:06.309005    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:08.823021    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:13.825156    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:13.825463    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:13.849420    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:13.849540    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:13.865428    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:13.865505    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:13.878086    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:13.878163    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:13.889426    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:13.889497    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:13.902355    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:13.902421    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:13.913586    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:13.913662    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:13.923854    4671 logs.go:276] 0 containers: []
	W0729 10:40:13.923865    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:13.923920    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:13.934053    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:40:13.934071    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:13.934076    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:13.948011    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:13.948022    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:13.961946    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:13.961956    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:13.976382    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:13.976393    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:13.990759    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:13.990770    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:14.015529    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:14.015538    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:14.052410    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:14.052423    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:14.064005    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:14.064017    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:14.080908    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:14.080919    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:14.098419    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:14.098429    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:40:14.109770    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:14.109781    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:14.121293    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:14.121304    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:14.159176    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:14.159187    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:14.163405    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:14.163414    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:14.188198    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:14.188211    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:14.199767    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:14.199778    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:14.211977    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:14.211990    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
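The block above is one full iteration of the loop that dominates this log: api_server.go probes https://10.0.2.15:8443/healthz, the GET dies after roughly five seconds with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" (compare the 10:40:08.823 check against the 10:40:13.825 failure), and logs.go then enumerates each control-plane component with docker ps -a --filter=name=k8s_<component> and tails 400 lines from every container it finds. The sketch below approximates that cycle under stated assumptions: probeHealthz and gatherLogs are hypothetical names, the commands run locally rather than over SSH as ssh_runner does, and none of it is minikube's actual implementation.

    // healthzloop.go - illustrative sketch of the probe-then-gather cycle in
    // this log. All names here (probeHealthz, gatherLogs) are hypothetical;
    // minikube's real logic lives in api_server.go and logs.go and executes
    // the docker commands over SSH via ssh_runner.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os/exec"
        "strings"
        "time"
    )

    // probeHealthz issues a GET with a 5-second client timeout, matching the
    // ~5s gap between each "Checking apiserver healthz" and "stopped" pair.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // the cluster serves a self-signed certificate
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            // on timeout this is "context deadline exceeded (Client.Timeout
            // exceeded while awaiting headers)", the exact error logged above
            return err
        }
        resp.Body.Close()
        return nil
    }

    // gatherLogs lists all containers (running or exited) for one component
    // and tails the last 400 lines of each, mirroring the logs.go entries.
    func gatherLogs(component string) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_"+component, "--format={{.ID}}").Output()
        if err != nil {
            return
        }
        for _, id := range strings.Fields(string(out)) {
            exec.Command("docker", "logs", "--tail", "400", id).Run()
        }
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for {
            if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
                fmt.Println("stopped:", err)
                for _, c := range components {
                    gatherLogs(c)
                }
            }
            // the log shows the next probe starting ~2.5s after gathering ends
            time.Sleep(2500 * time.Millisecond)
        }
    }

Most components report two container IDs per docker ps -a call (the -a flag includes exited containers), which is consistent with each component having both a pre-restart and a current instance; both get tailed every cycle.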
	I0729 10:40:16.725853    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:21.727642    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:21.727851    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:21.743894    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:21.743973    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:21.756779    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:21.756858    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:21.767935    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:21.768006    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:21.778240    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:21.778311    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:21.788704    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:21.788770    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:21.798986    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:21.799048    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:21.809310    4671 logs.go:276] 0 containers: []
	W0729 10:40:21.809323    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:21.809385    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:21.820604    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:40:21.820623    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:21.820630    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:21.825353    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:21.825360    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:21.864261    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:21.864271    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:21.878462    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:21.878476    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:21.912378    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:21.912391    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:21.926774    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:21.926785    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:21.938151    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:21.938164    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:21.952066    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:21.952079    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:21.963873    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:21.963884    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:21.981366    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:21.981376    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:40:21.992522    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:21.992534    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:22.016339    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:22.016347    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:22.027789    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:22.027799    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:22.064879    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:22.064886    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:22.093055    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:22.093067    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:22.107643    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:22.107656    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:22.123115    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:22.123125    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
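One detail worth noting in the "container status" step above, /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": it prefers crictl when installed and falls back to docker otherwise. The `echo crictl` arm means that when crictl is absent the first command still runs (and fails), so the || fallback to docker fires. Below is a rough Go equivalent under the same assumptions as the earlier sketch; containerStatus is a hypothetical helper, and it short-circuits the doomed crictl attempt instead of letting it fail the way the shell form does.

    // crictl_fallback.go - sketch of the container-status fallback above:
    // prefer crictl when installed, otherwise use docker. Hypothetical code,
    // not minikube's; the original runs as a shell one-liner over SSH with
    // sudo, and dropping sudo here is a simplification.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() ([]byte, error) {
        // exec.LookPath is the Go counterpart of `which`: it resolves the
        // binary on PATH if present.
        if path, err := exec.LookPath("crictl"); err == nil {
            return exec.Command(path, "ps", "-a").CombinedOutput()
        }
        // the `|| sudo docker ps -a` branch: docker as the fallback CLI
        return exec.Command("docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Print(string(out))
    }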
	I0729 10:40:24.636519    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:29.638883    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:29.639131    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:29.657542    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:29.657635    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:29.672963    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:29.673043    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:29.684319    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:29.684395    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:29.700196    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:29.700283    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:29.710643    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:29.710711    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:29.721348    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:29.721407    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:29.732149    4671 logs.go:276] 0 containers: []
	W0729 10:40:29.732163    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:29.732225    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:29.743182    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:40:29.743203    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:29.743210    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:29.755601    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:29.755618    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:29.775050    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:29.775063    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:29.794533    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:29.794544    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:29.806157    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:29.806170    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:29.810493    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:29.810503    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:29.826613    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:29.826623    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:29.838584    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:29.838597    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:29.850237    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:29.850248    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:29.873268    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:29.873276    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:29.891176    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:29.891190    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:29.916280    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:29.916298    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:29.930773    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:29.930784    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:29.945715    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:29.945726    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:40:29.958246    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:29.958256    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:29.995693    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:29.995701    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:30.030155    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:30.030167    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:32.543339    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:37.545764    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:37.546110    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:37.579958    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:37.580094    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:37.600966    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:37.601059    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:37.615948    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:37.616016    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:37.628679    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:37.628750    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:37.643248    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:37.643331    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:37.656508    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:37.656565    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:37.667119    4671 logs.go:276] 0 containers: []
	W0729 10:40:37.667134    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:37.667196    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:37.678031    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:40:37.678047    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:37.678052    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:37.716627    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:37.716640    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:37.731626    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:37.731636    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:37.746527    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:37.746538    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:37.769449    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:37.769464    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:37.773974    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:37.773980    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:37.808456    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:37.808474    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:37.834563    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:37.834580    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:37.845947    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:37.845960    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:37.857779    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:37.857790    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:37.869846    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:37.869860    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:37.887726    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:37.887738    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:37.903566    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:37.903582    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:40:37.915966    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:37.915977    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:37.928599    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:37.928612    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:37.943144    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:37.943158    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:37.956155    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:37.956166    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:40.480867    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:45.482844    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:45.482936    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:45.496002    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:45.496080    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:45.507349    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:45.507415    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:45.518690    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:45.518757    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:45.531292    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:45.531358    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:45.542419    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:45.542490    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:45.553902    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:45.553976    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:45.565416    4671 logs.go:276] 0 containers: []
	W0729 10:40:45.565427    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:45.565481    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:45.576387    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:40:45.576405    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:45.576411    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:45.618274    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:45.618295    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:45.658709    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:45.658718    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:45.672280    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:45.672292    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:45.698762    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:45.698776    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:45.714915    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:45.714927    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:45.730337    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:45.730354    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:45.749771    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:45.749787    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:45.754629    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:45.754639    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:45.770988    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:45.770999    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:45.785569    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:45.785590    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:45.799379    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:45.799388    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:45.811243    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:45.811255    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:45.834989    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:45.834996    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:45.848576    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:45.848588    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:45.860498    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:45.860511    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:45.874102    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:45.874111    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:40:48.387411    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:40:53.389555    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:40:53.389839    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:40:53.408332    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:40:53.408429    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:40:53.422482    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:40:53.422562    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:40:53.434261    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:40:53.434347    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:40:53.445496    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:40:53.445564    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:40:53.456310    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:40:53.456380    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:40:53.467139    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:40:53.467209    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:40:53.477687    4671 logs.go:276] 0 containers: []
	W0729 10:40:53.477698    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:40:53.477754    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:40:53.488327    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:40:53.488348    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:40:53.488353    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:40:53.505653    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:40:53.505663    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:40:53.528165    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:40:53.528172    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:40:53.539731    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:40:53.539742    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:40:53.554719    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:40:53.554731    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:40:53.566669    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:40:53.566679    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:40:53.580751    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:40:53.580761    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:40:53.592167    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:40:53.592177    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:40:53.596381    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:40:53.596391    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:40:53.610491    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:40:53.610500    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:40:53.635320    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:40:53.635331    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:40:53.649473    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:40:53.649489    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:40:53.665582    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:40:53.665592    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:40:53.677844    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:40:53.677854    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:40:53.689584    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:40:53.689595    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:40:53.727236    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:40:53.727251    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:40:53.762078    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:40:53.762090    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:40:56.344401    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:01.346616    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:01.346764    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:01.364783    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:01.364855    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:01.376987    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:01.377046    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:01.387014    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:01.387083    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:01.397336    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:01.397410    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:01.407351    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:01.407419    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:01.421113    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:01.421186    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:01.431986    4671 logs.go:276] 0 containers: []
	W0729 10:41:01.431997    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:01.432049    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:01.442431    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:01.442449    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:01.442455    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:01.454577    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:01.454587    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:01.466279    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:01.466291    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:01.504734    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:01.504745    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:01.544246    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:01.544258    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:01.569835    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:01.569845    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:01.586207    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:01.586216    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:01.599308    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:01.599320    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:01.607025    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:01.607035    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:01.618529    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:01.618542    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:01.633829    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:01.633840    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:01.652447    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:01.652458    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:01.666571    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:01.666581    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:01.681308    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:01.681318    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:01.692624    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:01.692634    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:01.704384    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:01.704399    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:01.719207    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:01.719218    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:04.245470    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:09.247762    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:09.248017    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:09.269536    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:09.269664    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:09.285959    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:09.286036    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:09.298510    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:09.298580    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:09.309545    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:09.309620    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:09.319636    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:09.319704    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:09.330103    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:09.330175    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:09.339939    4671 logs.go:276] 0 containers: []
	W0729 10:41:09.339950    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:09.340007    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:09.350701    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:09.350717    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:09.350723    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:09.364478    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:09.364491    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:09.378865    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:09.378882    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:09.390362    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:09.390374    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:09.402559    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:09.402569    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:09.439942    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:09.439952    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:09.475189    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:09.475201    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:09.489922    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:09.489934    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:09.501758    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:09.501769    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:09.513068    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:09.513081    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:09.517186    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:09.517195    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:09.534687    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:09.534698    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:09.546563    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:09.546573    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:09.564740    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:09.564751    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:09.584521    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:09.584533    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:09.611580    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:09.611595    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:09.629926    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:09.629939    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:12.156239    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:17.158419    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:17.158571    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:17.169235    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:17.169309    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:17.179922    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:17.179985    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:17.190683    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:17.190753    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:17.201185    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:17.201258    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:17.211684    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:17.211758    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:17.222184    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:17.222251    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:17.232507    4671 logs.go:276] 0 containers: []
	W0729 10:41:17.232520    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:17.232575    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:17.242861    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:17.242889    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:17.242896    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:17.277312    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:17.277325    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:17.292399    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:17.292409    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:17.316396    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:17.316403    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:17.327832    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:17.327844    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:17.342669    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:17.342680    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:17.354494    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:17.354505    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:17.393200    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:17.393206    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:17.397239    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:17.397247    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:17.415251    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:17.415262    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:17.427089    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:17.427102    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:17.438266    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:17.438276    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:17.452566    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:17.452577    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:17.477565    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:17.477576    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:17.489009    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:17.489022    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:17.506420    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:17.506433    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:17.520464    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:17.520475    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:20.034066    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:25.036215    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:25.036482    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:25.056754    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:25.056845    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:25.071358    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:25.071442    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:25.083416    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:25.083489    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:25.094428    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:25.094501    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:25.104800    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:25.104862    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:25.115865    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:25.115930    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:25.128583    4671 logs.go:276] 0 containers: []
	W0729 10:41:25.128598    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:25.128664    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:25.140530    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:25.140550    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:25.140555    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:25.151846    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:25.151858    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:25.174542    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:25.174549    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:25.192117    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:25.192130    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:25.203754    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:25.203765    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:25.217843    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:25.217853    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:25.229304    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:25.229316    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:25.243704    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:25.243714    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:25.278138    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:25.278151    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:25.300347    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:25.300358    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:25.311573    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:25.311584    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:25.324385    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:25.324396    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:25.336262    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:25.336271    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:25.356592    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:25.356602    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:25.395751    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:25.395759    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:25.399850    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:25.399857    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:25.424980    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:25.424990    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:27.952688    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:32.954813    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:32.955087    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:32.981331    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:32.981434    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:32.999521    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:32.999602    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:33.012978    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:33.013039    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:33.027672    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:33.027741    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:33.038151    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:33.038211    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:33.048355    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:33.048423    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:33.058333    4671 logs.go:276] 0 containers: []
	W0729 10:41:33.058344    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:33.058400    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:33.068640    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:33.068657    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:33.068663    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:33.072902    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:33.072908    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:33.086972    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:33.086981    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:33.100993    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:33.101004    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:33.112241    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:33.112255    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:33.127338    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:33.127347    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:33.140477    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:33.140488    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:33.164892    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:33.164906    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:33.177227    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:33.177238    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:33.191810    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:33.191821    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:33.211626    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:33.211635    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:33.223733    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:33.223744    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:33.246193    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:33.246202    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:33.284183    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:33.284191    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:33.318916    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:33.318927    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:33.336807    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:33.336818    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:33.350691    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:33.350703    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:35.863854    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:40.866021    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
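The pair of lines above recurs throughout this run: a healthz probe is issued against the in-VM apiserver, and exactly five seconds later it is reported as stopped with a client timeout. A minimal sketch of that kind of probe, assuming a plain HTTPS GET with a 5-second client timeout; the names here are illustrative, not minikube's actual api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver /healthz endpoint.
// The 5-second timeout matches the gap between the "Checking" and
// "stopped" lines above; TLS verification is skipped only because this
// sketch probes a self-signed test cluster.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. Client.Timeout exceeded while awaiting headers
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if string(body) != "ok" {
		return fmt.Errorf("healthz returned %q", body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}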
	I0729 10:41:40.866402    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:40.900638    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:40.900766    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:40.919337    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:40.919432    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:40.936012    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:40.936084    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:40.947844    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:40.947919    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:40.958523    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:40.958593    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:40.980106    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:40.980179    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:40.989938    4671 logs.go:276] 0 containers: []
	W0729 10:41:40.989952    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:40.990012    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:41.000693    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:41.000710    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:41.000716    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:41.004773    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:41.004783    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:41.029471    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:41.029484    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:41.052796    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:41.052804    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:41.063852    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:41.063863    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:41.101037    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:41.101044    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:41.114690    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:41.114706    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:41.129317    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:41.129327    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:41.142168    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:41.142183    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:41.156409    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:41.156419    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:41.176062    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:41.176072    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:41.193631    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:41.193644    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:41.205921    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:41.205934    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:41.217584    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:41.217599    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:41.257559    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:41.257570    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:41.280188    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:41.280203    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:41.300622    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:41.300635    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
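Each diagnostic pass above follows the same recipe: enumerate container IDs per control-plane component with a k8s_* name filter, tail the last 400 lines of each, then sweep journalctl, dmesg, container status, and a describe-nodes dump. A condensed sketch of that loop using the docker commands visible in the log; this is illustrative, not the logs.go/ssh_runner implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the k8s_* name filters used in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func main() {
	for _, c := range components {
		// List all containers (running or exited) for this component.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		// Tail the last 400 log lines of each container, as in the log above.
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s", c, id, logs)
		}
	}
}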
	I0729 10:41:43.820295    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:48.822455    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:48.822867    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:48.854526    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:48.854664    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:48.874310    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:48.874413    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:48.889980    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:48.890047    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:48.901994    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:48.902066    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:48.912749    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:48.912813    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:48.923740    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:48.923808    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:48.934431    4671 logs.go:276] 0 containers: []
	W0729 10:41:48.934445    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:48.934502    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:48.945311    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:48.945330    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:48.945335    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:48.950155    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:48.950164    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:48.966298    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:48.966309    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:48.989644    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:48.989659    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:49.001023    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:49.001035    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:49.015259    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:49.015271    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:49.027018    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:49.027030    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:49.038832    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:49.038847    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:49.056730    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:49.056741    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:49.070512    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:49.070526    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:49.110514    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:49.110523    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:49.135296    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:49.135307    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:49.149498    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:49.149511    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:49.162835    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:49.162846    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:49.197077    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:49.197091    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:49.211447    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:49.211458    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:49.226778    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:49.226789    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:51.741094    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:41:56.743453    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:41:56.743832    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:41:56.780914    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:41:56.781054    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:41:56.802280    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:41:56.802378    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:41:56.820117    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:41:56.820186    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:41:56.832215    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:41:56.832290    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:41:56.842921    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:41:56.842981    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:41:56.853226    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:41:56.853311    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:41:56.865596    4671 logs.go:276] 0 containers: []
	W0729 10:41:56.865607    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:41:56.865670    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:41:56.881232    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:41:56.881251    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:41:56.881256    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:41:56.896638    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:41:56.896648    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:41:56.931262    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:41:56.931274    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:41:56.946340    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:41:56.946350    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:41:56.958104    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:41:56.958114    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:41:56.981094    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:41:56.981102    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:41:56.994824    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:41:56.994834    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:41:57.006847    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:41:57.006858    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:41:57.032075    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:41:57.032086    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:41:57.043818    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:41:57.043829    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:41:57.060591    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:41:57.060601    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:41:57.097701    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:41:57.097710    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:41:57.101688    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:41:57.101694    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:41:57.115812    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:41:57.115821    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:41:57.126782    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:41:57.126792    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:41:57.140071    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:41:57.140080    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:41:57.154552    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:41:57.154563    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:41:59.666448    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:04.668786    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:04.669121    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:42:04.704626    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:42:04.704736    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:42:04.724022    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:42:04.724103    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:42:04.736794    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:42:04.736858    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:42:04.747460    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:42:04.747532    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:42:04.763577    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:42:04.763645    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:42:04.774507    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:42:04.774569    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:42:04.784468    4671 logs.go:276] 0 containers: []
	W0729 10:42:04.784482    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:42:04.784536    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:42:04.794795    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:42:04.794812    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:42:04.794817    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:42:04.812127    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:42:04.812138    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:42:04.837937    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:42:04.837947    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:42:04.877159    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:42:04.877170    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:42:04.892134    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:42:04.892148    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:42:04.903995    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:42:04.904006    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:42:04.923104    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:42:04.923116    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:42:04.934398    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:42:04.934409    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:42:04.948958    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:42:04.948969    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:42:04.963621    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:42:04.963631    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:42:04.979348    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:42:04.979358    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:42:04.990669    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:42:04.990680    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:42:04.994941    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:42:04.994952    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:42:05.029931    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:42:05.029942    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:42:05.054967    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:42:05.054977    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:42:05.070563    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:42:05.070576    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:42:05.081839    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:42:05.081850    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:42:07.594679    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:12.596789    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:12.597022    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:42:12.616533    4671 logs.go:276] 2 containers: [151103ab65a7 911773d2a582]
	I0729 10:42:12.616649    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:42:12.631071    4671 logs.go:276] 2 containers: [a7a479935fa8 3a751030b0c1]
	I0729 10:42:12.631141    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:42:12.642581    4671 logs.go:276] 1 containers: [c4551dbd25d2]
	I0729 10:42:12.642648    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:42:12.653110    4671 logs.go:276] 2 containers: [8ab4bad332af d85e93f01c88]
	I0729 10:42:12.653196    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:42:12.663962    4671 logs.go:276] 1 containers: [e270fae8598c]
	I0729 10:42:12.664033    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:42:12.674413    4671 logs.go:276] 2 containers: [34f0c2d547f7 06ee739538c0]
	I0729 10:42:12.674486    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:42:12.685107    4671 logs.go:276] 0 containers: []
	W0729 10:42:12.685121    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:42:12.685178    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:42:12.697294    4671 logs.go:276] 2 containers: [72e632c3512d 8efdf8826d49]
	I0729 10:42:12.697312    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:42:12.697318    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:42:12.709489    4671 logs.go:123] Gathering logs for kube-apiserver [911773d2a582] ...
	I0729 10:42:12.709500    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911773d2a582"
	I0729 10:42:12.734360    4671 logs.go:123] Gathering logs for coredns [c4551dbd25d2] ...
	I0729 10:42:12.734369    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4551dbd25d2"
	I0729 10:42:12.745835    4671 logs.go:123] Gathering logs for kube-scheduler [8ab4bad332af] ...
	I0729 10:42:12.745847    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ab4bad332af"
	I0729 10:42:12.758245    4671 logs.go:123] Gathering logs for kube-scheduler [d85e93f01c88] ...
	I0729 10:42:12.758260    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d85e93f01c88"
	I0729 10:42:12.773235    4671 logs.go:123] Gathering logs for storage-provisioner [72e632c3512d] ...
	I0729 10:42:12.773250    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72e632c3512d"
	I0729 10:42:12.784503    4671 logs.go:123] Gathering logs for storage-provisioner [8efdf8826d49] ...
	I0729 10:42:12.784513    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8efdf8826d49"
	I0729 10:42:12.795648    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:42:12.795662    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:42:12.833061    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:42:12.833075    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:42:12.867357    4671 logs.go:123] Gathering logs for etcd [3a751030b0c1] ...
	I0729 10:42:12.867369    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a751030b0c1"
	I0729 10:42:12.882417    4671 logs.go:123] Gathering logs for kube-controller-manager [34f0c2d547f7] ...
	I0729 10:42:12.882428    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34f0c2d547f7"
	I0729 10:42:12.899810    4671 logs.go:123] Gathering logs for kube-controller-manager [06ee739538c0] ...
	I0729 10:42:12.899826    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ee739538c0"
	I0729 10:42:12.914484    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:42:12.914494    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:42:12.935827    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:42:12.935835    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:42:12.939919    4671 logs.go:123] Gathering logs for etcd [a7a479935fa8] ...
	I0729 10:42:12.939928    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a479935fa8"
	I0729 10:42:12.959766    4671 logs.go:123] Gathering logs for kube-apiserver [151103ab65a7] ...
	I0729 10:42:12.959779    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 151103ab65a7"
	I0729 10:42:12.974286    4671 logs.go:123] Gathering logs for kube-proxy [e270fae8598c] ...
	I0729 10:42:12.974297    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e270fae8598c"
	I0729 10:42:15.487558    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:20.489778    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:20.489873    4671 kubeadm.go:597] duration metric: took 4m3.593169292s to restartPrimaryControlPlane
	W0729 10:42:20.489928    4671 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
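The duration metric above sums the whole cycle seen so far: probe healthz, time out after five seconds, spend a few seconds gathering diagnostics, and repeat until a roughly four-minute budget is exhausted, at which point the restart attempt is abandoned and the cluster is reset. A minimal sketch of that control flow; restartControlPlane, probe, and gatherDiagnostics are hypothetical names, not minikube's kubeadm.go API:

package main

import (
	"fmt"
	"time"
)

// restartControlPlane keeps probing /healthz, collecting diagnostics
// between failures, until the budget runs out; the caller then falls
// back to "kubeadm reset", as the next log line does.
func restartControlPlane(budget time.Duration, probe func() error, gatherDiagnostics func()) error {
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			return nil // apiserver answered healthz
		}
		gatherDiagnostics() // the docker/journalctl sweep seen above
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("control plane not healthy within %s", budget)
}

func main() {
	// Shortened budget for the sketch; the run above spent about 4m0s.
	err := restartControlPlane(10*time.Second,
		func() error { return fmt.Errorf("context deadline exceeded") },
		func() {}) // diagnostics collection elided
	if err != nil {
		fmt.Println("! Unable to restart control-plane node(s), will reset cluster:", err)
	}
}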
	I0729 10:42:20.489952    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 10:42:21.534981    4671 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.045045666s)
	I0729 10:42:21.535060    4671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:42:21.539851    4671 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:42:21.542514    4671 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:42:21.545368    4671 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:42:21.545374    4671 kubeadm.go:157] found existing configuration files:
	
	I0729 10:42:21.545405    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf
	I0729 10:42:21.548167    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:42:21.548194    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:42:21.550717    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf
	I0729 10:42:21.553392    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:42:21.553414    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:42:21.556622    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf
	I0729 10:42:21.559846    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:42:21.559869    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:42:21.562384    4671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf
	I0729 10:42:21.565128    4671 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:42:21.565150    4671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
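The grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint (here none of the four files even exists, so grep exits with status 2) is removed before kubeadm init runs. A sketch of that sweep over the same four files and endpoint shown in the log; cleanStaleConfigs is an illustrative name:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// cleanStaleConfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the logged grep/rm pairs.
func cleanStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent, or, as in the
		// log above, when the file itself does not exist.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f) // stands in for the logged "sudo rm -f"
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:50526")
}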
	I0729 10:42:21.568173    4671 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 10:42:21.585270    4671 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 10:42:21.585374    4671 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:42:21.633076    4671 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:42:21.633134    4671 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:42:21.633179    4671 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 10:42:21.683381    4671 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:42:21.687531    4671 out.go:204]   - Generating certificates and keys ...
	I0729 10:42:21.687568    4671 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:42:21.687612    4671 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:42:21.687667    4671 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 10:42:21.687699    4671 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 10:42:21.687742    4671 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 10:42:21.687769    4671 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 10:42:21.687800    4671 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 10:42:21.687833    4671 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 10:42:21.687874    4671 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 10:42:21.687922    4671 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 10:42:21.687951    4671 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 10:42:21.687981    4671 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:42:21.784077    4671 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:42:21.880964    4671 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:42:21.976287    4671 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:42:22.033454    4671 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:42:22.062827    4671 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:42:22.063183    4671 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:42:22.063222    4671 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:42:22.151907    4671 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:42:22.155823    4671 out.go:204]   - Booting up control plane ...
	I0729 10:42:22.155873    4671 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:42:22.155922    4671 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:42:22.155975    4671 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:42:22.156018    4671 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:42:22.156098    4671 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 10:42:26.654014    4671 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.500996 seconds
	I0729 10:42:26.654075    4671 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:42:26.658022    4671 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:42:27.175118    4671 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:42:27.175425    4671 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-396000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:42:27.678086    4671 kubeadm.go:310] [bootstrap-token] Using token: xjj04q.3qhbk0y1mpomvu5q
	I0729 10:42:27.684260    4671 out.go:204]   - Configuring RBAC rules ...
	I0729 10:42:27.684334    4671 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:42:27.684383    4671 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:42:27.686071    4671 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:42:27.690868    4671 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 10:42:27.691898    4671 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:42:27.692753    4671 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:42:27.695863    4671 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:42:27.864969    4671 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:42:28.082936    4671 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:42:28.083451    4671 kubeadm.go:310] 
	I0729 10:42:28.083488    4671 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:42:28.083491    4671 kubeadm.go:310] 
	I0729 10:42:28.083537    4671 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:42:28.083540    4671 kubeadm.go:310] 
	I0729 10:42:28.083552    4671 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:42:28.083582    4671 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:42:28.083614    4671 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:42:28.083619    4671 kubeadm.go:310] 
	I0729 10:42:28.083650    4671 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:42:28.083653    4671 kubeadm.go:310] 
	I0729 10:42:28.083684    4671 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:42:28.083691    4671 kubeadm.go:310] 
	I0729 10:42:28.083717    4671 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:42:28.083753    4671 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:42:28.083791    4671 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:42:28.083794    4671 kubeadm.go:310] 
	I0729 10:42:28.083836    4671 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:42:28.083888    4671 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:42:28.083891    4671 kubeadm.go:310] 
	I0729 10:42:28.083945    4671 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xjj04q.3qhbk0y1mpomvu5q \
	I0729 10:42:28.083997    4671 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e543544bbdf55d58d5e8ecb84a321dadc33a389aefb88a9b79f2e5e89d2eeaba \
	I0729 10:42:28.084009    4671 kubeadm.go:310] 	--control-plane 
	I0729 10:42:28.084012    4671 kubeadm.go:310] 
	I0729 10:42:28.084064    4671 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:42:28.084068    4671 kubeadm.go:310] 
	I0729 10:42:28.084109    4671 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xjj04q.3qhbk0y1mpomvu5q \
	I0729 10:42:28.084156    4671 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e543544bbdf55d58d5e8ecb84a321dadc33a389aefb88a9b79f2e5e89d2eeaba 
	I0729 10:42:28.084267    4671 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 10:42:28.084343    4671 cni.go:84] Creating CNI manager for ""
	I0729 10:42:28.084352    4671 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:42:28.091987    4671 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 10:42:28.096031    4671 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 10:42:28.099465    4671 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
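The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI configuration. A representative bridge conflist of the kind deployed here, written via Go for consistency with the other sketches; every field value below is an assumption for illustration, not the literal bytes minikube copied:

package main

import (
	"fmt"
	"os"
)

// conflist approximates a bridge CNI chain (bridge plugin plus portmap);
// the real 1-k8s.conflist deployed above may differ in detail.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed:", err) // needs root, like the logged sudo mkdir/scp
	}
}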
	I0729 10:42:28.104534    4671 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:42:28.104607    4671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:42:28.104658    4671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-396000 minikube.k8s.io/updated_at=2024_07_29T10_42_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8b24aa06450b07a59980f53ae4b9b78f9c5a1899 minikube.k8s.io/name=stopped-upgrade-396000 minikube.k8s.io/primary=true
	I0729 10:42:28.149105    4671 ops.go:34] apiserver oom_adj: -16
	I0729 10:42:28.149111    4671 kubeadm.go:1113] duration metric: took 44.546584ms to wait for elevateKubeSystemPrivileges
	I0729 10:42:28.149126    4671 kubeadm.go:394] duration metric: took 4m11.265467167s to StartCluster
	I0729 10:42:28.149137    4671 settings.go:142] acquiring lock: {Name:mk00a8a4362ef98c344b6c02e7313691374680e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:42:28.149226    4671 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:42:28.149622    4671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/kubeconfig: {Name:mk69e1ff39ac907f2664a3f00c50d678e5bdc356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:42:28.149820    4671 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:42:28.149904    4671 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:42:28.149914    4671 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 10:42:28.149977    4671 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-396000"
	I0729 10:42:28.149992    4671 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-396000"
	I0729 10:42:28.149993    4671 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-396000"
	W0729 10:42:28.149996    4671 addons.go:243] addon storage-provisioner should already be in state true
	I0729 10:42:28.150002    4671 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-396000"
	I0729 10:42:28.150008    4671 host.go:66] Checking if "stopped-upgrade-396000" exists ...
	I0729 10:42:28.150442    4671 retry.go:31] will retry after 1.184300369s: connect: dial unix /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/monitor: connect: connection refused
	I0729 10:42:28.153982    4671 out.go:177] * Verifying Kubernetes components...
	I0729 10:42:28.161916    4671 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:42:28.166073    4671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:42:28.169029    4671 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:42:28.169036    4671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:42:28.169042    4671 sshutil.go:53] new ssh client: &{IP:localhost Port:50491 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/id_rsa Username:docker}
	I0729 10:42:28.253865    4671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:42:28.258956    4671 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:42:28.258996    4671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:42:28.262986    4671 api_server.go:72] duration metric: took 113.159166ms to wait for apiserver process to appear ...
	I0729 10:42:28.262995    4671 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:42:28.263001    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:28.274907    4671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:42:29.336020    4671 kapi.go:59] client config for stopped-upgrade-396000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/stopped-upgrade-396000/client.key", CAFile:"/Users/jenkins/minikube-integration/19345-1151/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1044f80c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 10:42:29.336165    4671 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-396000"
	W0729 10:42:29.336171    4671 addons.go:243] addon default-storageclass should already be in state true
	I0729 10:42:29.336184    4671 host.go:66] Checking if "stopped-upgrade-396000" exists ...
	I0729 10:42:29.336765    4671 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:42:29.336771    4671 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:42:29.336777    4671 sshutil.go:53] new ssh client: &{IP:localhost Port:50491 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/stopped-upgrade-396000/id_rsa Username:docker}
	I0729 10:42:29.365369    4671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:42:33.264171    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:33.264189    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:38.264781    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:38.264823    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:43.264938    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:43.264964    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:48.265117    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:48.265174    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:53.265489    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:53.265530    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:42:58.266024    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:42:58.266067    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 10:42:59.426094    4671 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 10:42:59.430554    4671 out.go:177] * Enabled addons: storage-provisioner
	I0729 10:42:59.442424    4671 addons.go:510] duration metric: took 31.293490375s for enable addons: enabled=[storage-provisioner]
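Addon enablement above reduces to staging a manifest under /etc/kubernetes/addons/ and applying it with the in-VM kubectl; storage-provisioner is a plain apply and succeeds, while default-storageclass also has to list StorageClasses through the unreachable apiserver, hence its i/o timeout. A sketch of the apply step, assuming the paths visible in the log; applyAddon is an illustrative name:

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon runs the in-VM kubectl against a staged addon manifest,
// matching the command lines visible in the log above.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl", "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Println(err)
	}
}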
	I0729 10:43:03.266683    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:03.266744    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:08.268097    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:08.268144    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:13.269393    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:13.269430    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:18.271033    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:18.271082    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:23.271292    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:23.271336    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:28.273456    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:28.273569    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:43:28.287881    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:43:28.287940    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:43:28.298859    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:43:28.298926    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:43:28.309204    4671 logs.go:276] 2 containers: [38e59427cdd0 9079d201f3f1]
	I0729 10:43:28.309270    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:43:28.319463    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:43:28.319526    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:43:28.329729    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:43:28.329797    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:43:28.340707    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:43:28.340776    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:43:28.351176    4671 logs.go:276] 0 containers: []
	W0729 10:43:28.351190    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:43:28.351249    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:43:28.361343    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:43:28.361357    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:43:28.361362    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:43:28.375897    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:43:28.375910    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:43:28.392592    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:43:28.392602    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:43:28.407227    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:43:28.407237    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:43:28.425418    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:43:28.425427    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:43:28.436209    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:43:28.436220    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:43:28.449820    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:43:28.449831    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:43:28.488622    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:43:28.488637    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:43:28.492864    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:43:28.492872    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:43:28.527895    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:43:28.527907    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:43:28.541992    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:43:28.542003    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:43:28.553829    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:43:28.553843    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:43:28.569401    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:43:28.569413    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
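That completes one full diagnostic sweep, and the same sweep repeats after every failed probe for the rest of this section: logs.go runs one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component, then `docker logs --tail 400` on each ID found, plus the kubelet and Docker/cri-docker journals, a filtered dmesg, `kubectl describe nodes`, and a crictl-or-docker container-status fallback. A hedged sketch of that sweep, with the SSH runner replaced by local exec purely for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// One entry per component probed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
	"storage-provisioner",
}

// containerIDs mirrors: docker ps -a --filter=name=k8s_<c> --format={{.ID}}
// Note that -a includes exited containers, so a restarted pod yields
// more than one ID for the same component.
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range components {
		ids := containerIDs(c)
		if len(ids) == 0 {
			// corresponds to the warning path in the log:
			// No container was found matching "kindnet"
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// mirrors: /bin/bash -c "docker logs --tail 400 <id>"
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}
```

The kindnet lookup returning zero containers is expected here (the warning, not an error), and the use of `docker ps -a` rather than `docker ps` matters below, where the coredns enumeration starts returning extra IDs.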
	I0729 10:43:31.097001    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:36.099543    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:36.099729    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:43:36.114001    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:43:36.114084    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:43:36.125915    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:43:36.125992    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:43:36.137335    4671 logs.go:276] 2 containers: [38e59427cdd0 9079d201f3f1]
	I0729 10:43:36.137410    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:43:36.147417    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:43:36.147477    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:43:36.157597    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:43:36.157670    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:43:36.167345    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:43:36.167404    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:43:36.177535    4671 logs.go:276] 0 containers: []
	W0729 10:43:36.177550    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:43:36.177609    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:43:36.190402    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:43:36.190418    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:43:36.190424    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:43:36.202228    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:43:36.202240    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:43:36.219219    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:43:36.219229    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:43:36.242609    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:43:36.242620    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:43:36.278535    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:43:36.278544    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:43:36.282446    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:43:36.282455    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:43:36.317359    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:43:36.317369    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:43:36.331446    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:43:36.331456    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:43:36.345630    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:43:36.345644    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:43:36.357238    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:43:36.357251    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:43:36.371853    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:43:36.371865    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:43:36.386950    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:43:36.386962    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:43:36.398118    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:43:36.398131    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:43:38.911402    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:43.913582    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:43.913748    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:43:43.930020    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:43:43.930103    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:43:43.944058    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:43:43.944128    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:43:43.954834    4671 logs.go:276] 2 containers: [38e59427cdd0 9079d201f3f1]
	I0729 10:43:43.954895    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:43:43.966706    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:43:43.966766    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:43:43.977629    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:43:43.977694    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:43:43.990328    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:43:43.990384    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:43:43.999815    4671 logs.go:276] 0 containers: []
	W0729 10:43:43.999826    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:43:43.999878    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:43:44.010409    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:43:44.010424    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:43:44.010430    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:43:44.025499    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:43:44.025509    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:43:44.037115    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:43:44.037124    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:43:44.055106    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:43:44.055118    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:43:44.094290    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:43:44.094301    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:43:44.098425    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:43:44.098433    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:43:44.131779    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:43:44.131791    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:43:44.153251    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:43:44.153264    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:43:44.164901    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:43:44.164911    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:43:44.179149    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:43:44.179161    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:43:44.190605    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:43:44.190615    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:43:44.202100    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:43:44.202110    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:43:44.225458    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:43:44.225464    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:43:46.738452    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:51.740677    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:51.740850    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:43:51.753227    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:43:51.753304    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:43:51.764518    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:43:51.764599    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:43:51.775348    4671 logs.go:276] 2 containers: [38e59427cdd0 9079d201f3f1]
	I0729 10:43:51.775424    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:43:51.793709    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:43:51.793782    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:43:51.809976    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:43:51.810046    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:43:51.820543    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:43:51.820605    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:43:51.830988    4671 logs.go:276] 0 containers: []
	W0729 10:43:51.831003    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:43:51.831060    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:43:51.841574    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:43:51.841588    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:43:51.841594    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:43:51.879039    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:43:51.879051    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:43:51.893688    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:43:51.893699    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:43:51.905339    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:43:51.905350    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:43:51.923048    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:43:51.923063    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:43:51.941090    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:43:51.941103    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:43:51.964863    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:43:51.964871    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:43:52.003838    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:43:52.003851    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:43:52.008611    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:43:52.008620    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:43:52.029650    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:43:52.029663    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:43:52.041132    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:43:52.041145    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:43:52.055986    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:43:52.055998    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:43:52.067448    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:43:52.067460    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:43:54.581314    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:43:59.583406    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:43:59.583528    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:43:59.596818    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:43:59.596893    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:43:59.607700    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:43:59.607771    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:43:59.618744    4671 logs.go:276] 2 containers: [38e59427cdd0 9079d201f3f1]
	I0729 10:43:59.618817    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:43:59.629593    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:43:59.629660    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:43:59.639961    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:43:59.640031    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:43:59.653909    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:43:59.653977    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:43:59.663933    4671 logs.go:276] 0 containers: []
	W0729 10:43:59.663945    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:43:59.664001    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:43:59.674964    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:43:59.674980    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:43:59.674985    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:43:59.711640    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:43:59.711649    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:43:59.747201    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:43:59.747215    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:43:59.762058    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:43:59.762067    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:43:59.775709    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:43:59.775718    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:43:59.786954    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:43:59.786963    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:43:59.801997    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:43:59.802008    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:43:59.814014    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:43:59.814024    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:43:59.825806    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:43:59.825817    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:43:59.830103    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:43:59.830109    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:43:59.841690    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:43:59.841702    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:43:59.863177    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:43:59.863190    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:43:59.874762    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:43:59.874772    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:44:02.400797    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:44:07.402486    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:44:07.402582    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:44:07.425710    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:44:07.425802    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:44:07.444627    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:44:07.444717    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:44:07.464291    4671 logs.go:276] 2 containers: [38e59427cdd0 9079d201f3f1]
	I0729 10:44:07.464380    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:44:07.498789    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:44:07.498858    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:44:07.510407    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:44:07.510502    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:44:07.544048    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:44:07.544125    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:44:07.558809    4671 logs.go:276] 0 containers: []
	W0729 10:44:07.558822    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:44:07.558889    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:44:07.570332    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:44:07.570350    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:44:07.570356    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:44:07.612717    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:44:07.612730    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:44:07.634368    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:44:07.634382    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:44:07.654185    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:44:07.654200    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:44:07.667729    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:44:07.667744    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:44:07.685986    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:44:07.686007    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:44:07.701706    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:44:07.701718    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:44:07.725905    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:44:07.725926    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:44:07.732993    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:44:07.733007    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:44:07.764366    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:44:07.764385    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:44:07.785025    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:44:07.785037    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:44:07.798247    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:44:07.798260    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:44:07.810518    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:44:07.810530    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:44:10.354721    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:44:15.357420    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:44:15.357879    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:44:15.408501    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:44:15.408621    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:44:15.426012    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:44:15.426103    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:44:15.439240    4671 logs.go:276] 2 containers: [38e59427cdd0 9079d201f3f1]
	I0729 10:44:15.439306    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:44:15.450997    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:44:15.451048    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:44:15.461957    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:44:15.462019    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:44:15.474335    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:44:15.474406    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:44:15.485745    4671 logs.go:276] 0 containers: []
	W0729 10:44:15.485758    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:44:15.485822    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:44:15.498338    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:44:15.498356    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:44:15.498362    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:44:15.516112    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:44:15.516124    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:44:15.532186    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:44:15.532200    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:44:15.545269    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:44:15.545287    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:44:15.565443    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:44:15.565455    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:44:15.606775    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:44:15.606804    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:44:15.612894    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:44:15.612912    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:44:15.649926    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:44:15.649935    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:44:15.661879    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:44:15.661890    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:44:15.680603    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:44:15.680614    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:44:15.692990    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:44:15.693001    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:44:15.710877    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:44:15.710887    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:44:15.722610    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:44:15.722621    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:44:18.250424    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:44:23.252563    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:44:23.253006    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:44:23.291103    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:44:23.291263    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:44:23.311998    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:44:23.312095    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:44:23.326845    4671 logs.go:276] 2 containers: [38e59427cdd0 9079d201f3f1]
	I0729 10:44:23.326909    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:44:23.338809    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:44:23.338865    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:44:23.349705    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:44:23.349761    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:44:23.360127    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:44:23.360189    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:44:23.370318    4671 logs.go:276] 0 containers: []
	W0729 10:44:23.370332    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:44:23.370393    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:44:23.380331    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:44:23.380344    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:44:23.380350    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:44:23.396728    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:44:23.396740    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:44:23.408224    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:44:23.408236    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:44:23.419392    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:44:23.419404    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:44:23.443027    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:44:23.443036    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:44:23.454079    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:44:23.454091    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:44:23.490396    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:44:23.490404    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:44:23.494706    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:44:23.494714    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:44:23.508319    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:44:23.508328    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:44:23.519650    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:44:23.519661    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:44:23.536871    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:44:23.536880    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:44:23.570972    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:44:23.570985    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:44:23.582663    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:44:23.582677    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:44:26.098748    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:44:31.099923    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:44:31.100294    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:44:31.137241    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:44:31.137357    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:44:31.156077    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:44:31.156155    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:44:31.170540    4671 logs.go:276] 2 containers: [38e59427cdd0 9079d201f3f1]
	I0729 10:44:31.170599    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:44:31.182456    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:44:31.182526    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:44:31.193090    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:44:31.193148    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:44:31.203996    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:44:31.204062    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:44:31.214785    4671 logs.go:276] 0 containers: []
	W0729 10:44:31.214796    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:44:31.214853    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:44:31.225433    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:44:31.225446    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:44:31.225451    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:44:31.239859    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:44:31.239869    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:44:31.252597    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:44:31.252610    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:44:31.274092    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:44:31.274105    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:44:31.299191    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:44:31.299198    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:44:31.303823    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:44:31.303830    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:44:31.318339    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:44:31.318351    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:44:31.333191    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:44:31.333203    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:44:31.344916    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:44:31.344928    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:44:31.356572    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:44:31.356585    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:44:31.392456    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:44:31.392465    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:44:31.427056    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:44:31.427067    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:44:31.438945    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:44:31.438958    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:44:33.958710    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:44:38.960430    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:44:38.960563    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:44:38.975028    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:44:38.975100    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:44:38.986748    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:44:38.986816    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:44:38.997444    4671 logs.go:276] 2 containers: [38e59427cdd0 9079d201f3f1]
	I0729 10:44:38.997512    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:44:39.007516    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:44:39.007578    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:44:39.017749    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:44:39.017819    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:44:39.027988    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:44:39.028051    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:44:39.039500    4671 logs.go:276] 0 containers: []
	W0729 10:44:39.039511    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:44:39.039575    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:44:39.049386    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:44:39.049403    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:44:39.049408    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:44:39.063145    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:44:39.063156    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:44:39.075079    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:44:39.075091    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:44:39.086331    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:44:39.086342    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:44:39.100762    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:44:39.100772    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:44:39.112402    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:44:39.112411    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:44:39.116544    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:44:39.116551    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:44:39.174443    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:44:39.174453    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:44:39.189339    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:44:39.189350    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:44:39.212539    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:44:39.212549    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:44:39.223756    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:44:39.223765    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:44:39.262081    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:44:39.262094    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:44:39.280501    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:44:39.280511    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:44:41.794218    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:44:46.796940    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:44:46.797313    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:44:46.838020    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:44:46.838140    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:44:46.860527    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:44:46.860630    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:44:46.875461    4671 logs.go:276] 4 containers: [511b18bc53cf 66f1b6af6209 38e59427cdd0 9079d201f3f1]
	I0729 10:44:46.875537    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:44:46.887395    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:44:46.887459    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:44:46.898111    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:44:46.898176    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:44:46.908576    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:44:46.908638    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:44:46.920182    4671 logs.go:276] 0 containers: []
	W0729 10:44:46.920194    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:44:46.920259    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:44:46.930735    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:44:46.930753    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:44:46.930759    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:44:46.944814    4671 logs.go:123] Gathering logs for coredns [511b18bc53cf] ...
	I0729 10:44:46.944826    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511b18bc53cf"
	I0729 10:44:46.956603    4671 logs.go:123] Gathering logs for coredns [66f1b6af6209] ...
	I0729 10:44:46.956615    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f1b6af6209"
	I0729 10:44:46.967450    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:44:46.967461    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:44:46.981661    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:44:46.981670    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:44:47.019981    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:44:47.019991    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:44:47.034056    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:44:47.034066    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:44:47.051600    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:44:47.051610    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:44:47.055846    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:44:47.055855    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:44:47.091862    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:44:47.091876    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:44:47.104486    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:44:47.104497    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:44:47.127965    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:44:47.127974    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:44:47.139275    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:44:47.139288    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:44:47.150983    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:44:47.150998    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:44:47.162556    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:44:47.162566    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
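From this cycle onward the coredns enumeration returns four container IDs instead of two, with 511b18bc53cf and 66f1b6af6209 listed ahead of the original pair. Since the enumeration uses `docker ps -a`, which lists exited containers alongside running ones, this is consistent with the CoreDNS pods having been restarted during the retry window while the apiserver itself remained unreachable for the entire span logged here.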
	I0729 10:44:49.675985    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:44:54.678227    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:44:54.678638    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:44:54.718293    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:44:54.718426    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:44:54.740466    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:44:54.740572    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:44:54.756112    4671 logs.go:276] 4 containers: [511b18bc53cf 66f1b6af6209 38e59427cdd0 9079d201f3f1]
	I0729 10:44:54.756200    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:44:54.768897    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:44:54.768962    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:44:54.780243    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:44:54.780313    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:44:54.790857    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:44:54.790922    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:44:54.800970    4671 logs.go:276] 0 containers: []
	W0729 10:44:54.800980    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:44:54.801030    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:44:54.811960    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:44:54.811977    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:44:54.811983    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:44:54.823430    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:44:54.823443    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:44:54.847138    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:44:54.847147    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:44:54.851397    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:44:54.851405    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:44:54.865299    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:44:54.865312    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:44:54.877386    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:44:54.877399    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:44:54.898147    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:44:54.898159    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:44:54.921405    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:44:54.921416    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:44:54.933399    4671 logs.go:123] Gathering logs for coredns [66f1b6af6209] ...
	I0729 10:44:54.933412    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f1b6af6209"
	I0729 10:44:54.944867    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:44:54.944880    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:44:54.956941    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:44:54.956953    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:44:54.969059    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:44:54.969071    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:44:55.007879    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:44:55.007890    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:44:55.049637    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:44:55.049652    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:44:55.070102    4671 logs.go:123] Gathering logs for coredns [511b18bc53cf] ...
	I0729 10:44:55.070118    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511b18bc53cf"
	I0729 10:44:57.584518    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:45:02.587110    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:45:02.587392    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:45:02.616445    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:45:02.616571    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:45:02.635342    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:45:02.635441    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:45:02.650373    4671 logs.go:276] 4 containers: [511b18bc53cf 66f1b6af6209 38e59427cdd0 9079d201f3f1]
	I0729 10:45:02.650445    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:45:02.662469    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:45:02.662534    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:45:02.674047    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:45:02.674108    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:45:02.684444    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:45:02.684517    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:45:02.694873    4671 logs.go:276] 0 containers: []
	W0729 10:45:02.694885    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:45:02.694936    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:45:02.705000    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:45:02.705021    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:45:02.705026    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:45:02.743021    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:45:02.743027    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:45:02.776639    4671 logs.go:123] Gathering logs for coredns [511b18bc53cf] ...
	I0729 10:45:02.776652    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511b18bc53cf"
	I0729 10:45:02.787637    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:45:02.787650    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:45:02.804227    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:45:02.804238    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:45:02.828652    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:45:02.828664    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:45:02.853810    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:45:02.853820    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:45:02.867419    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:45:02.867429    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:45:02.881909    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:45:02.881921    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:45:02.897248    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:45:02.897259    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:45:02.916543    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:45:02.916552    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:45:02.928193    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:45:02.928205    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:45:02.939645    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:45:02.939659    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:45:02.943693    4671 logs.go:123] Gathering logs for coredns [66f1b6af6209] ...
	I0729 10:45:02.943702    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f1b6af6209"
	I0729 10:45:02.964308    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:45:02.964318    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:45:05.478541    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:45:10.480852    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:45:10.481021    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:45:10.505012    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:45:10.505100    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:45:10.520364    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:45:10.520428    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:45:10.533284    4671 logs.go:276] 4 containers: [511b18bc53cf 66f1b6af6209 38e59427cdd0 9079d201f3f1]
	I0729 10:45:10.533360    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:45:10.544685    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:45:10.544744    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:45:10.554881    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:45:10.554941    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:45:10.565822    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:45:10.565878    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:45:10.575894    4671 logs.go:276] 0 containers: []
	W0729 10:45:10.575905    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:45:10.575967    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:45:10.586673    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:45:10.586691    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:45:10.586696    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:45:10.620920    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:45:10.620933    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:45:10.632214    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:45:10.632225    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:45:10.643719    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:45:10.643731    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:45:10.667904    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:45:10.667915    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:45:10.690586    4671 logs.go:123] Gathering logs for coredns [511b18bc53cf] ...
	I0729 10:45:10.690598    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511b18bc53cf"
	I0729 10:45:10.702096    4671 logs.go:123] Gathering logs for coredns [66f1b6af6209] ...
	I0729 10:45:10.702106    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f1b6af6209"
	I0729 10:45:10.713737    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:45:10.713747    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:45:10.728155    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:45:10.728168    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:45:10.745191    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:45:10.745203    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:45:10.781002    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:45:10.781011    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:45:10.785183    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:45:10.785191    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:45:10.796979    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:45:10.796991    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:45:10.810799    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:45:10.810809    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:45:10.822513    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:45:10.822524    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:45:13.336237    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:45:18.338303    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:45:18.338579    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:45:18.364486    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:45:18.364597    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:45:18.382226    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:45:18.382315    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:45:18.396819    4671 logs.go:276] 4 containers: [511b18bc53cf 66f1b6af6209 38e59427cdd0 9079d201f3f1]
	I0729 10:45:18.396891    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:45:18.409287    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:45:18.409345    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:45:18.419603    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:45:18.419657    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:45:18.430131    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:45:18.430193    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:45:18.443016    4671 logs.go:276] 0 containers: []
	W0729 10:45:18.443028    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:45:18.443085    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:45:18.453805    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:45:18.453821    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:45:18.453828    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:45:18.465210    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:45:18.465224    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:45:18.478935    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:45:18.478946    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:45:18.493297    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:45:18.493309    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:45:18.507249    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:45:18.507260    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:45:18.521145    4671 logs.go:123] Gathering logs for coredns [66f1b6af6209] ...
	I0729 10:45:18.521158    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f1b6af6209"
	I0729 10:45:18.539172    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:45:18.539182    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:45:18.555233    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:45:18.555243    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:45:18.567119    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:45:18.567128    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:45:18.606430    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:45:18.606443    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:45:18.611109    4671 logs.go:123] Gathering logs for coredns [511b18bc53cf] ...
	I0729 10:45:18.611122    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511b18bc53cf"
	I0729 10:45:18.624498    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:45:18.624510    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:45:18.649567    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:45:18.649581    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:45:18.688666    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:45:18.688677    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:45:18.701062    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:45:18.701072    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:45:21.220783    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:45:26.222576    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:45:26.222794    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:45:26.252166    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:45:26.252256    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:45:26.270653    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:45:26.270733    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:45:26.285358    4671 logs.go:276] 4 containers: [511b18bc53cf 66f1b6af6209 38e59427cdd0 9079d201f3f1]
	I0729 10:45:26.285441    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:45:26.297823    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:45:26.297893    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:45:26.310411    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:45:26.310481    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:45:26.322241    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:45:26.322314    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:45:26.334086    4671 logs.go:276] 0 containers: []
	W0729 10:45:26.334100    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:45:26.334159    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:45:26.348622    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:45:26.348640    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:45:26.348645    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:45:26.365085    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:45:26.365103    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:45:26.383889    4671 logs.go:123] Gathering logs for coredns [511b18bc53cf] ...
	I0729 10:45:26.383904    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511b18bc53cf"
	I0729 10:45:26.405442    4671 logs.go:123] Gathering logs for coredns [66f1b6af6209] ...
	I0729 10:45:26.405455    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f1b6af6209"
	I0729 10:45:26.418566    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:45:26.418577    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:45:26.444641    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:45:26.444653    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:45:26.457106    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:45:26.457117    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:45:26.498969    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:45:26.498978    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:45:26.515471    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:45:26.515481    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:45:26.551435    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:45:26.551446    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:45:26.585522    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:45:26.585532    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:45:26.603700    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:45:26.603712    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:45:26.614827    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:45:26.614835    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:45:26.618902    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:45:26.618908    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:45:26.635009    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:45:26.635019    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:45:29.149018    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:45:34.150034    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:45:34.150352    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:45:34.179979    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:45:34.180104    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:45:34.199788    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:45:34.199877    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:45:34.213284    4671 logs.go:276] 4 containers: [511b18bc53cf 66f1b6af6209 38e59427cdd0 9079d201f3f1]
	I0729 10:45:34.213364    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:45:34.224849    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:45:34.224920    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:45:34.235216    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:45:34.235279    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:45:34.245695    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:45:34.245754    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:45:34.259682    4671 logs.go:276] 0 containers: []
	W0729 10:45:34.259699    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:45:34.259757    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:45:34.270250    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:45:34.270267    4671 logs.go:123] Gathering logs for coredns [511b18bc53cf] ...
	I0729 10:45:34.270272    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511b18bc53cf"
	I0729 10:45:34.281906    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:45:34.281918    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:45:34.293437    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:45:34.293447    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:45:34.298187    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:45:34.298195    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:45:34.315279    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:45:34.315289    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:45:34.330852    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:45:34.330864    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:45:34.355648    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:45:34.355658    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:45:34.367161    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:45:34.367171    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:45:34.404513    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:45:34.404520    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:45:34.417674    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:45:34.417686    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:45:34.431994    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:45:34.432006    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:45:34.449644    4671 logs.go:123] Gathering logs for coredns [66f1b6af6209] ...
	I0729 10:45:34.449653    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f1b6af6209"
	I0729 10:45:34.461404    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:45:34.461414    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:45:34.483005    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:45:34.483018    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:45:34.494860    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:45:34.494869    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:45:37.033650    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:45:42.036223    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:45:42.036327    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:45:42.052170    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:45:42.052223    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:45:42.067774    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:45:42.067820    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:45:42.078921    4671 logs.go:276] 4 containers: [511b18bc53cf 66f1b6af6209 38e59427cdd0 9079d201f3f1]
	I0729 10:45:42.078978    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:45:42.090086    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:45:42.090138    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:45:42.103486    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:45:42.103542    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:45:42.115452    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:45:42.115506    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:45:42.125840    4671 logs.go:276] 0 containers: []
	W0729 10:45:42.125853    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:45:42.125908    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:45:42.140291    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:45:42.140306    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:45:42.140311    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:45:42.179515    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:45:42.179528    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:45:42.194650    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:45:42.194660    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:45:42.209831    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:45:42.209844    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:45:42.226784    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:45:42.226795    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:45:42.239045    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:45:42.239054    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:45:42.244189    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:45:42.244205    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:45:42.281130    4671 logs.go:123] Gathering logs for coredns [511b18bc53cf] ...
	I0729 10:45:42.281143    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511b18bc53cf"
	I0729 10:45:42.293512    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:45:42.293526    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:45:42.311023    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:45:42.311033    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:45:42.329171    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:45:42.329184    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:45:42.341784    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:45:42.341795    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:45:42.357723    4671 logs.go:123] Gathering logs for coredns [66f1b6af6209] ...
	I0729 10:45:42.357736    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f1b6af6209"
	I0729 10:45:42.369655    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:45:42.369667    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:45:42.391767    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:45:42.391778    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:45:44.918297    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:45:49.921113    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:45:49.921568    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:45:49.969654    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:45:49.969779    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:45:49.989804    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:45:49.989881    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:45:50.003916    4671 logs.go:276] 4 containers: [511b18bc53cf 66f1b6af6209 38e59427cdd0 9079d201f3f1]
	I0729 10:45:50.003999    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:45:50.016250    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:45:50.016317    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:45:50.031138    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:45:50.031213    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:45:50.041739    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:45:50.041800    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:45:50.052582    4671 logs.go:276] 0 containers: []
	W0729 10:45:50.052594    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:45:50.052653    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:45:50.063192    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:45:50.063209    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:45:50.063215    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:45:50.077273    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:45:50.077284    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:45:50.088799    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:45:50.088811    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:45:50.101172    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:45:50.101185    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:45:50.122259    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:45:50.122269    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:45:50.160637    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:45:50.160649    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:45:50.195250    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:45:50.195264    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:45:50.219674    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:45:50.219682    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:45:50.231030    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:45:50.231043    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:45:50.242267    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:45:50.242277    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:45:50.246502    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:45:50.246511    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:45:50.260296    4671 logs.go:123] Gathering logs for coredns [66f1b6af6209] ...
	I0729 10:45:50.260309    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f1b6af6209"
	I0729 10:45:50.272131    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:45:50.272143    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:45:50.283872    4671 logs.go:123] Gathering logs for coredns [511b18bc53cf] ...
	I0729 10:45:50.283884    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511b18bc53cf"
	I0729 10:45:50.294944    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:45:50.294957    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:45:52.810836    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:45:57.813024    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:45:57.813392    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:45:57.852178    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:45:57.852296    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:45:57.873817    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:45:57.873936    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:45:57.889581    4671 logs.go:276] 4 containers: [511b18bc53cf 66f1b6af6209 38e59427cdd0 9079d201f3f1]
	I0729 10:45:57.889655    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:45:57.902105    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:45:57.902168    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:45:57.913298    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:45:57.913359    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:45:57.923585    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:45:57.923654    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:45:57.935133    4671 logs.go:276] 0 containers: []
	W0729 10:45:57.935145    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:45:57.935201    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:45:57.945924    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:45:57.945943    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:45:57.945948    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:45:57.957853    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:45:57.957864    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:45:57.969724    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:45:57.969735    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:45:58.005974    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:45:58.005982    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:45:58.040449    4671 logs.go:123] Gathering logs for coredns [66f1b6af6209] ...
	I0729 10:45:58.040461    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f1b6af6209"
	I0729 10:45:58.052581    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:45:58.052595    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:45:58.067461    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:45:58.067473    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:45:58.085076    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:45:58.085087    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:45:58.109926    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:45:58.109933    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:45:58.123734    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:45:58.123744    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:45:58.136142    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:45:58.136155    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:45:58.148432    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:45:58.148444    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:45:58.152735    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:45:58.152744    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:45:58.176244    4671 logs.go:123] Gathering logs for coredns [511b18bc53cf] ...
	I0729 10:45:58.176256    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511b18bc53cf"
	I0729 10:45:58.187896    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:45:58.187907    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:46:00.702006    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:46:05.704518    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:46:05.704594    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:46:05.716243    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:46:05.716295    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:46:05.727507    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:46:05.727568    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:46:05.739404    4671 logs.go:276] 4 containers: [511b18bc53cf 66f1b6af6209 38e59427cdd0 9079d201f3f1]
	I0729 10:46:05.739462    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:46:05.751507    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:46:05.751564    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:46:05.762016    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:46:05.762072    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:46:05.777720    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:46:05.777781    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:46:05.788973    4671 logs.go:276] 0 containers: []
	W0729 10:46:05.788983    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:46:05.789024    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:46:05.801028    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:46:05.801044    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:46:05.801049    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:46:05.815231    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:46:05.815246    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:46:05.827408    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:46:05.827420    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:46:05.843478    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:46:05.843490    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:46:05.862070    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:46:05.862082    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:46:05.874535    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:46:05.874548    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:46:05.912550    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:46:05.912570    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:46:05.917388    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:46:05.917396    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:46:05.942558    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:46:05.942573    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:46:05.956477    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:46:05.956487    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:46:05.981694    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:46:05.981708    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:46:06.020194    4671 logs.go:123] Gathering logs for coredns [66f1b6af6209] ...
	I0729 10:46:06.020206    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f1b6af6209"
	I0729 10:46:06.032227    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:46:06.032240    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:46:06.045076    4671 logs.go:123] Gathering logs for coredns [511b18bc53cf] ...
	I0729 10:46:06.045088    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511b18bc53cf"
	I0729 10:46:06.057826    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:46:06.057838    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:46:08.571365    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:46:13.573627    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:46:13.574043    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:46:13.611139    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:46:13.611260    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:46:13.633416    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:46:13.633507    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:46:13.648125    4671 logs.go:276] 4 containers: [511b18bc53cf 66f1b6af6209 38e59427cdd0 9079d201f3f1]
	I0729 10:46:13.648191    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:46:13.666996    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:46:13.667061    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:46:13.677037    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:46:13.677095    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:46:13.687325    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:46:13.687393    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:46:13.697947    4671 logs.go:276] 0 containers: []
	W0729 10:46:13.697960    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:46:13.698020    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:46:13.708891    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:46:13.708908    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:46:13.708913    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:46:13.720814    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:46:13.720826    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:46:13.732384    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:46:13.732397    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:46:13.744639    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:46:13.744652    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:46:13.749390    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:46:13.749396    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:46:13.764658    4671 logs.go:123] Gathering logs for coredns [511b18bc53cf] ...
	I0729 10:46:13.764671    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511b18bc53cf"
	I0729 10:46:13.775916    4671 logs.go:123] Gathering logs for coredns [66f1b6af6209] ...
	I0729 10:46:13.775929    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f1b6af6209"
	I0729 10:46:13.787557    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:46:13.787570    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:46:13.823857    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:46:13.823866    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:46:13.841028    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:46:13.841038    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:46:13.864215    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:46:13.864221    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:46:13.898250    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:46:13.898264    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:46:13.911397    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:46:13.911408    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:46:13.925674    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:46:13.925688    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:46:13.938753    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:46:13.938767    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:46:16.455478    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:46:21.458129    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:46:21.458374    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:46:21.486849    4671 logs.go:276] 1 containers: [ad43bf81c8bd]
	I0729 10:46:21.486963    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:46:21.502156    4671 logs.go:276] 1 containers: [ce2e05574b7b]
	I0729 10:46:21.502239    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:46:21.516194    4671 logs.go:276] 4 containers: [511b18bc53cf 66f1b6af6209 38e59427cdd0 9079d201f3f1]
	I0729 10:46:21.516267    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:46:21.526611    4671 logs.go:276] 1 containers: [a39d04da7636]
	I0729 10:46:21.526679    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:46:21.536656    4671 logs.go:276] 1 containers: [d66c9dfeedc7]
	I0729 10:46:21.536720    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:46:21.547122    4671 logs.go:276] 1 containers: [952d5e415c7a]
	I0729 10:46:21.547188    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:46:21.557949    4671 logs.go:276] 0 containers: []
	W0729 10:46:21.557962    4671 logs.go:278] No container was found matching "kindnet"
	I0729 10:46:21.558017    4671 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:46:21.567775    4671 logs.go:276] 1 containers: [56653d89d6ec]
	I0729 10:46:21.567792    4671 logs.go:123] Gathering logs for kube-scheduler [a39d04da7636] ...
	I0729 10:46:21.567798    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39d04da7636"
	I0729 10:46:21.582252    4671 logs.go:123] Gathering logs for kubelet ...
	I0729 10:46:21.582264    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:46:21.618396    4671 logs.go:123] Gathering logs for dmesg ...
	I0729 10:46:21.618403    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:46:21.622405    4671 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:46:21.622412    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:46:21.655788    4671 logs.go:123] Gathering logs for coredns [66f1b6af6209] ...
	I0729 10:46:21.655802    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f1b6af6209"
	I0729 10:46:21.667699    4671 logs.go:123] Gathering logs for coredns [38e59427cdd0] ...
	I0729 10:46:21.667712    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38e59427cdd0"
	I0729 10:46:21.679163    4671 logs.go:123] Gathering logs for storage-provisioner [56653d89d6ec] ...
	I0729 10:46:21.679174    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56653d89d6ec"
	I0729 10:46:21.690629    4671 logs.go:123] Gathering logs for kube-apiserver [ad43bf81c8bd] ...
	I0729 10:46:21.690640    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad43bf81c8bd"
	I0729 10:46:21.705169    4671 logs.go:123] Gathering logs for etcd [ce2e05574b7b] ...
	I0729 10:46:21.705181    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce2e05574b7b"
	I0729 10:46:21.718605    4671 logs.go:123] Gathering logs for kube-controller-manager [952d5e415c7a] ...
	I0729 10:46:21.718616    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952d5e415c7a"
	I0729 10:46:21.735375    4671 logs.go:123] Gathering logs for coredns [511b18bc53cf] ...
	I0729 10:46:21.735389    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 511b18bc53cf"
	I0729 10:46:21.746804    4671 logs.go:123] Gathering logs for coredns [9079d201f3f1] ...
	I0729 10:46:21.746816    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9079d201f3f1"
	I0729 10:46:21.759312    4671 logs.go:123] Gathering logs for kube-proxy [d66c9dfeedc7] ...
	I0729 10:46:21.759325    4671 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d66c9dfeedc7"
	I0729 10:46:21.771043    4671 logs.go:123] Gathering logs for Docker ...
	I0729 10:46:21.771056    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:46:21.794870    4671 logs.go:123] Gathering logs for container status ...
	I0729 10:46:21.794880    4671 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:46:24.309315    4671 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:46:29.310665    4671 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:46:29.313925    4671 out.go:177] 
	W0729 10:46:29.317978    4671 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 10:46:29.317986    4671 out.go:239] * 
	W0729 10:46:29.318458    4671 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:46:29.333971    4671 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-396000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (570.35s)
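The stderr block above shows the failure mode: for roughly six minutes minikube cycles through the same probe, issuing a GET against https://10.0.2.15:8443/healthz, timing out after about five seconds, and then re-enumerating containers and gathering component logs before the next attempt, until the 6m0s node-wait deadline expires with GUEST_START. The Go sketch below approximates that probe loop; it is illustrative only (the 5-second per-request timeout is inferred from the log timestamps, and the helper name probeHealthz is invented here), not minikube's actual api_server.go code.

	// probeHealthz approximates the readiness probe seen in the stderr above:
	// GET https://10.0.2.15:8443/healthz with a short client timeout, retrying
	// until the API server reports "ok" or the overall deadline expires.
	// Illustrative sketch, not minikube's actual api_server.go code; the 5s
	// per-request timeout is an assumption inferred from the log timestamps.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func probeHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumed from the ~5s gaps between log entries
			Transport: &http.Transport{
				// The apiserver's cert is not trusted by the test host, so skip verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // healthy
				}
			}
			time.Sleep(2 * time.Second) // back off briefly before the next attempt
		}
		return fmt.Errorf("apiserver healthz never reported healthy: deadline exceeded")
	}

	func main() {
		if err := probeHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

Against an unreachable apiserver this loop ends the same way as the log: every request dies with Client.Timeout exceeded and the deadline error is returned once the overall wait expires.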

TestPause/serial/Start (9.88s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-830000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
E0729 10:44:17.383328    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-830000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.80946325s)

-- stdout --
	* [pause-830000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-830000" primary control-plane node in "pause-830000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-830000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-830000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-830000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-830000 -n pause-830000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-830000 -n pause-830000: exit status 7 (67.670917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-830000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.88s)
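The "Connection refused" above can be reproduced without minikube by dialing the unix socket directly. A quick sketch with BSD netcat, which ships with macOS (the 1-second timeout just keeps the probe from hanging if the daemon does answer):

	# Exit status 0 means something accepted the connection; a refusal here
	# matches the failure mode in the log above.
	nc -U -w 1 /var/run/socket_vmnet </dev/null \
	  && echo "socket accepts connections" \
	  || echo "connection refused: daemon down or socket stale"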

TestNoKubernetes/serial/StartWithK8s (9.9s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-615000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-615000 --driver=qemu2 : exit status 80 (9.843227958s)

-- stdout --
	* [NoKubernetes-615000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-615000" primary control-plane node in "NoKubernetes-615000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-615000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-615000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-615000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-615000 -n NoKubernetes-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-615000 -n NoKubernetes-615000: exit status 7 (53.325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.90s)
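The recovery path suggested in the stderr above can be scripted as below; the daemon-restart line is an assumption about how socket_vmnet was installed, while the two minikube commands are taken verbatim from this test:

	out/minikube-darwin-arm64 delete -p NoKubernetes-615000   # drop the half-created profile
	sudo brew services restart socket_vmnet                   # assumed install method
	out/minikube-darwin-arm64 start -p NoKubernetes-615000 --driver=qemu2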

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-615000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-615000 --no-kubernetes --driver=qemu2 : exit status 80 (5.240465583s)

-- stdout --
	* [NoKubernetes-615000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-615000
	* Restarting existing qemu2 VM for "NoKubernetes-615000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-615000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-615000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-615000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-615000 -n NoKubernetes-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-615000 -n NoKubernetes-615000: exit status 7 (54.823208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)
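Unlike StartWithK8s, this subtest prints "Restarting existing qemu2 VM" rather than "Creating qemu2 VM": the previous run left a machine directory behind, so the driver reuses the saved settings instead of provisioning from scratch. A sketch for inspecting that state, assuming the standard minikube machines layout under the MINIKUBE_HOME shown in the stdout:

	MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	ls "$MINIKUBE_HOME/machines/NoKubernetes-615000/"
	# config.json holds the saved driver settings the restart path reuses
	cat "$MINIKUBE_HOME/machines/NoKubernetes-615000/config.json"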

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-615000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-615000 --no-kubernetes --driver=qemu2 : exit status 80 (5.237841708s)

-- stdout --
	* [NoKubernetes-615000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-615000
	* Restarting existing qemu2 VM for "NoKubernetes-615000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-615000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-615000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-615000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-615000 -n NoKubernetes-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-615000 -n NoKubernetes-615000: exit status 7 (57.880458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-615000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-615000 --driver=qemu2 : exit status 80 (5.280884625s)

-- stdout --
	* [NoKubernetes-615000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-615000
	* Restarting existing qemu2 VM for "NoKubernetes-615000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-615000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-615000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-615000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-615000 -n NoKubernetes-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-615000 -n NoKubernetes-615000: exit status 7 (31.445167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)
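Note the two distinct exit codes in this block: start fails with 80 (the code the log ties to the GUEST_PROVISION error class), while status returns 7, which the harness explicitly treats as possibly fine ("may be ok"). When reproducing by hand, capture both so the classification matches the harness:

	out/minikube-darwin-arm64 start -p NoKubernetes-615000 --driver=qemu2; echo "start: $?"        # 80 in this run
	out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-615000; echo "status: $?"  # 7 in this run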

TestNetworkPlugins/group/auto/Start (9.85s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.845533792s)

-- stdout --
	* [auto-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-783000" primary control-plane node in "auto-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:45:18.756381    4977 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:18.756544    4977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:18.756548    4977 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:18.756550    4977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:18.756668    4977 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:45:18.757864    4977 out.go:298] Setting JSON to false
	I0729 10:45:18.775353    4977 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4482,"bootTime":1722270636,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:45:18.775434    4977 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:45:18.780735    4977 out.go:177] * [auto-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:45:18.788745    4977 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:45:18.788775    4977 notify.go:220] Checking for updates...
	I0729 10:45:18.795733    4977 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:45:18.798719    4977 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:45:18.801659    4977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:45:18.804697    4977 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:45:18.807731    4977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:45:18.811126    4977 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:18.811209    4977 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:45:18.811264    4977 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:45:18.814700    4977 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:45:18.821645    4977 start.go:297] selected driver: qemu2
	I0729 10:45:18.821657    4977 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:45:18.821663    4977 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:45:18.823972    4977 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:45:18.826653    4977 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:45:18.829828    4977 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:45:18.829862    4977 cni.go:84] Creating CNI manager for ""
	I0729 10:45:18.829870    4977 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:45:18.829878    4977 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:45:18.829918    4977 start.go:340] cluster config:
	{Name:auto-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:45:18.833820    4977 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:45:18.840667    4977 out.go:177] * Starting "auto-783000" primary control-plane node in "auto-783000" cluster
	I0729 10:45:18.844742    4977 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:45:18.844765    4977 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:45:18.844772    4977 cache.go:56] Caching tarball of preloaded images
	I0729 10:45:18.844854    4977 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:45:18.844867    4977 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:45:18.844945    4977 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/auto-783000/config.json ...
	I0729 10:45:18.844955    4977 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/auto-783000/config.json: {Name:mkdb3f53c9ef5299783fc4aeb865f74fc83b38e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:45:18.845252    4977 start.go:360] acquireMachinesLock for auto-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:45:18.845282    4977 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "auto-783000"
	I0729 10:45:18.845293    4977 start.go:93] Provisioning new machine with config: &{Name:auto-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:45:18.845327    4977 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:45:18.849726    4977 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:45:18.864863    4977 start.go:159] libmachine.API.Create for "auto-783000" (driver="qemu2")
	I0729 10:45:18.864887    4977 client.go:168] LocalClient.Create starting
	I0729 10:45:18.864948    4977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:45:18.864980    4977 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:18.864989    4977 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:18.865026    4977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:45:18.865050    4977 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:18.865059    4977 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:18.865453    4977 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:45:19.019867    4977 main.go:141] libmachine: Creating SSH key...
	I0729 10:45:19.135909    4977 main.go:141] libmachine: Creating Disk image...
	I0729 10:45:19.135920    4977 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:45:19.136122    4977 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/disk.qcow2
	I0729 10:45:19.145773    4977 main.go:141] libmachine: STDOUT: 
	I0729 10:45:19.145798    4977 main.go:141] libmachine: STDERR: 
	I0729 10:45:19.145842    4977 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/disk.qcow2 +20000M
	I0729 10:45:19.153899    4977 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:45:19.153913    4977 main.go:141] libmachine: STDERR: 
	I0729 10:45:19.153933    4977 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/disk.qcow2
	I0729 10:45:19.153939    4977 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:45:19.153951    4977 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:45:19.153979    4977 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:3e:3c:99:27:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/disk.qcow2
	I0729 10:45:19.155618    4977 main.go:141] libmachine: STDOUT: 
	I0729 10:45:19.155632    4977 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:45:19.155648    4977 client.go:171] duration metric: took 290.765125ms to LocalClient.Create
	I0729 10:45:21.157690    4977 start.go:128] duration metric: took 2.312420333s to createHost
	I0729 10:45:21.157736    4977 start.go:83] releasing machines lock for "auto-783000", held for 2.312518583s
	W0729 10:45:21.157764    4977 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:21.167709    4977 out.go:177] * Deleting "auto-783000" in qemu2 ...
	W0729 10:45:21.184156    4977 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:21.184167    4977 start.go:729] Will try again in 5 seconds ...
	I0729 10:45:26.184809    4977 start.go:360] acquireMachinesLock for auto-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:45:26.185435    4977 start.go:364] duration metric: took 475.292µs to acquireMachinesLock for "auto-783000"
	I0729 10:45:26.185591    4977 start.go:93] Provisioning new machine with config: &{Name:auto-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:45:26.185873    4977 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:45:26.197595    4977 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:45:26.249509    4977 start.go:159] libmachine.API.Create for "auto-783000" (driver="qemu2")
	I0729 10:45:26.249564    4977 client.go:168] LocalClient.Create starting
	I0729 10:45:26.249686    4977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:45:26.249755    4977 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:26.249779    4977 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:26.249860    4977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:45:26.249905    4977 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:26.249920    4977 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:26.250441    4977 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:45:26.413454    4977 main.go:141] libmachine: Creating SSH key...
	I0729 10:45:26.509913    4977 main.go:141] libmachine: Creating Disk image...
	I0729 10:45:26.509923    4977 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:45:26.510165    4977 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/disk.qcow2
	I0729 10:45:26.520428    4977 main.go:141] libmachine: STDOUT: 
	I0729 10:45:26.520449    4977 main.go:141] libmachine: STDERR: 
	I0729 10:45:26.520523    4977 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/disk.qcow2 +20000M
	I0729 10:45:26.529473    4977 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:45:26.529497    4977 main.go:141] libmachine: STDERR: 
	I0729 10:45:26.529512    4977 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/disk.qcow2
	I0729 10:45:26.529516    4977 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:45:26.529522    4977 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:45:26.529548    4977 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:3d:bd:1f:59:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/auto-783000/disk.qcow2
	I0729 10:45:26.531434    4977 main.go:141] libmachine: STDOUT: 
	I0729 10:45:26.531449    4977 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:45:26.531470    4977 client.go:171] duration metric: took 281.910416ms to LocalClient.Create
	I0729 10:45:28.533161    4977 start.go:128] duration metric: took 2.347322375s to createHost
	I0729 10:45:28.533215    4977 start.go:83] releasing machines lock for "auto-783000", held for 2.347830541s
	W0729 10:45:28.533372    4977 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:28.546830    4977 out.go:177] 
	W0729 10:45:28.549851    4977 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:45:28.549862    4977 out.go:239] * 
	* 
	W0729 10:45:28.550908    4977 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:45:28.562253    4977 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.85s)
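The verbose trace above shows the exact launch pattern: qemu-system-aarch64 is not run directly but through socket_vmnet_client, which connects to /var/run/socket_vmnet first and hands the connected descriptor to qemu as the -netdev socket,id=net0,fd=3 backend. The connection step can be isolated from qemu by giving the client a trivial payload command; using /usr/bin/true as the payload is an illustrative choice, not something this test does:

	# socket_vmnet_client <socket> <command...> connects, then execs the command
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true \
	  && echo "socket reachable" \
	  || echo "connection refused, independent of qemu"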

TestNetworkPlugins/group/kindnet/Start (9.98s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.975676458s)

-- stdout --
	* [kindnet-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-783000" primary control-plane node in "kindnet-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:45:30.670180    5086 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:30.670318    5086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:30.670322    5086 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:30.670325    5086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:30.670455    5086 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:45:30.671750    5086 out.go:298] Setting JSON to false
	I0729 10:45:30.688245    5086 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4494,"bootTime":1722270636,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:45:30.688319    5086 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:45:30.694570    5086 out.go:177] * [kindnet-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:45:30.702520    5086 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:45:30.702552    5086 notify.go:220] Checking for updates...
	I0729 10:45:30.710504    5086 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:45:30.713528    5086 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:45:30.716571    5086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:45:30.719543    5086 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:45:30.722496    5086 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:45:30.725911    5086 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:30.725980    5086 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:45:30.726035    5086 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:45:30.732482    5086 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:45:30.739472    5086 start.go:297] selected driver: qemu2
	I0729 10:45:30.739480    5086 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:45:30.739486    5086 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:45:30.741842    5086 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:45:30.744463    5086 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:45:30.747570    5086 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:45:30.747601    5086 cni.go:84] Creating CNI manager for "kindnet"
	I0729 10:45:30.747608    5086 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 10:45:30.747647    5086 start.go:340] cluster config:
	{Name:kindnet-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:45:30.751089    5086 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:45:30.758452    5086 out.go:177] * Starting "kindnet-783000" primary control-plane node in "kindnet-783000" cluster
	I0729 10:45:30.762510    5086 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:45:30.762526    5086 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:45:30.762543    5086 cache.go:56] Caching tarball of preloaded images
	I0729 10:45:30.762609    5086 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:45:30.762615    5086 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:45:30.762682    5086 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/kindnet-783000/config.json ...
	I0729 10:45:30.762700    5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/kindnet-783000/config.json: {Name:mkcb4984b6439885c60a17d9d72042e65d23186f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:45:30.763019    5086 start.go:360] acquireMachinesLock for kindnet-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:45:30.763050    5086 start.go:364] duration metric: took 25.417µs to acquireMachinesLock for "kindnet-783000"
	I0729 10:45:30.763062    5086 start.go:93] Provisioning new machine with config: &{Name:kindnet-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:45:30.763099    5086 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:45:30.767502    5086 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:45:30.782861    5086 start.go:159] libmachine.API.Create for "kindnet-783000" (driver="qemu2")
	I0729 10:45:30.782885    5086 client.go:168] LocalClient.Create starting
	I0729 10:45:30.782946    5086 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:45:30.782976    5086 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:30.782986    5086 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:30.783022    5086 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:45:30.783045    5086 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:30.783055    5086 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:30.783477    5086 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:45:30.937156    5086 main.go:141] libmachine: Creating SSH key...
	I0729 10:45:31.062285    5086 main.go:141] libmachine: Creating Disk image...
	I0729 10:45:31.062299    5086 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:45:31.062488    5086 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/disk.qcow2
	I0729 10:45:31.072799    5086 main.go:141] libmachine: STDOUT: 
	I0729 10:45:31.072867    5086 main.go:141] libmachine: STDERR: 
	I0729 10:45:31.072986    5086 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/disk.qcow2 +20000M
	I0729 10:45:31.082354    5086 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:45:31.082378    5086 main.go:141] libmachine: STDERR: 
	I0729 10:45:31.082403    5086 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/disk.qcow2
	I0729 10:45:31.082408    5086 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:45:31.082420    5086 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:45:31.082462    5086 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:17:aa:1e:30:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/disk.qcow2
	I0729 10:45:31.084536    5086 main.go:141] libmachine: STDOUT: 
	I0729 10:45:31.084558    5086 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:45:31.084581    5086 client.go:171] duration metric: took 301.700792ms to LocalClient.Create
	I0729 10:45:33.086827    5086 start.go:128] duration metric: took 2.323766375s to createHost
	I0729 10:45:33.086901    5086 start.go:83] releasing machines lock for "kindnet-783000", held for 2.323911625s
	W0729 10:45:33.086954    5086 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:33.105310    5086 out.go:177] * Deleting "kindnet-783000" in qemu2 ...
	W0729 10:45:33.132264    5086 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:33.132301    5086 start.go:729] Will try again in 5 seconds ...
	I0729 10:45:38.134287    5086 start.go:360] acquireMachinesLock for kindnet-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:45:38.134714    5086 start.go:364] duration metric: took 309.042µs to acquireMachinesLock for "kindnet-783000"
	I0729 10:45:38.134821    5086 start.go:93] Provisioning new machine with config: &{Name:kindnet-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:45:38.135135    5086 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:45:38.144687    5086 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:45:38.190216    5086 start.go:159] libmachine.API.Create for "kindnet-783000" (driver="qemu2")
	I0729 10:45:38.190268    5086 client.go:168] LocalClient.Create starting
	I0729 10:45:38.190400    5086 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:45:38.190468    5086 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:38.190489    5086 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:38.190545    5086 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:45:38.190587    5086 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:38.190600    5086 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:38.191112    5086 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:45:38.353798    5086 main.go:141] libmachine: Creating SSH key...
	I0729 10:45:38.556909    5086 main.go:141] libmachine: Creating Disk image...
	I0729 10:45:38.556919    5086 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:45:38.557172    5086 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/disk.qcow2
	I0729 10:45:38.566969    5086 main.go:141] libmachine: STDOUT: 
	I0729 10:45:38.566989    5086 main.go:141] libmachine: STDERR: 
	I0729 10:45:38.567047    5086 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/disk.qcow2 +20000M
	I0729 10:45:38.575408    5086 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:45:38.575424    5086 main.go:141] libmachine: STDERR: 
	I0729 10:45:38.575441    5086 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/disk.qcow2
	I0729 10:45:38.575445    5086 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:45:38.575455    5086 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:45:38.575483    5086 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:68:a2:0a:4c:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kindnet-783000/disk.qcow2
	I0729 10:45:38.577254    5086 main.go:141] libmachine: STDOUT: 
	I0729 10:45:38.577271    5086 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:45:38.577283    5086 client.go:171] duration metric: took 387.018375ms to LocalClient.Create
	I0729 10:45:40.579399    5086 start.go:128] duration metric: took 2.444303958s to createHost
	I0729 10:45:40.579460    5086 start.go:83] releasing machines lock for "kindnet-783000", held for 2.444798709s
	W0729 10:45:40.579796    5086 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:40.588330    5086 out.go:177] 
	W0729 10:45:40.593266    5086 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:45:40.593293    5086 out.go:239] * 
	* 
	W0729 10:45:40.594602    5086 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:45:40.605208    5086 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.98s)
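
The immediate cause of this failure (and of the other network-plugin failures below) is visible in the STDERR lines: socket_vmnet_client gets "Connection refused" dialing /var/run/socket_vmnet, meaning no socket_vmnet daemon was listening, so QEMU never received a network file descriptor. A minimal diagnostic sketch for the affected host, assuming socket_vmnet is installed under /opt/socket_vmnet as shown in the log (the --vmnet-gateway value follows the socket_vmnet README and is an assumption here, not taken from this run):

	# Is anything listening on the socket the client dials?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If not, start the daemon by hand (vmnet.framework requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

	# Then re-run a single failing profile to confirm:
	out/minikube-darwin-arm64 start -p kindnet-783000 --driver=qemu2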

TestNetworkPlugins/group/calico/Start (9.9s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.900481417s)

-- stdout --
	* [calico-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-783000" primary control-plane node in "calico-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:45:42.856992    5202 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:42.857125    5202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:42.857128    5202 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:42.857131    5202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:42.857274    5202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:45:42.858320    5202 out.go:298] Setting JSON to false
	I0729 10:45:42.874258    5202 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4506,"bootTime":1722270636,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:45:42.874334    5202 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:45:42.879828    5202 out.go:177] * [calico-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:45:42.887744    5202 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:45:42.887785    5202 notify.go:220] Checking for updates...
	I0729 10:45:42.894816    5202 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:45:42.897806    5202 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:45:42.900770    5202 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:45:42.903772    5202 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:45:42.906711    5202 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:45:42.910114    5202 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:42.910181    5202 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:45:42.910235    5202 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:45:42.914747    5202 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:45:42.921767    5202 start.go:297] selected driver: qemu2
	I0729 10:45:42.921774    5202 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:45:42.921780    5202 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:45:42.924033    5202 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:45:42.926776    5202 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:45:42.929767    5202 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:45:42.929802    5202 cni.go:84] Creating CNI manager for "calico"
	I0729 10:45:42.929809    5202 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0729 10:45:42.929837    5202 start.go:340] cluster config:
	{Name:calico-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:45:42.933439    5202 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:45:42.940865    5202 out.go:177] * Starting "calico-783000" primary control-plane node in "calico-783000" cluster
	I0729 10:45:42.944787    5202 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:45:42.944804    5202 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:45:42.944823    5202 cache.go:56] Caching tarball of preloaded images
	I0729 10:45:42.944884    5202 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:45:42.944890    5202 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:45:42.944947    5202 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/calico-783000/config.json ...
	I0729 10:45:42.944963    5202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/calico-783000/config.json: {Name:mk68174a112617ca566637186f5ada10632b1138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:45:42.945292    5202 start.go:360] acquireMachinesLock for calico-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:45:42.945328    5202 start.go:364] duration metric: took 30.959µs to acquireMachinesLock for "calico-783000"
	I0729 10:45:42.945340    5202 start.go:93] Provisioning new machine with config: &{Name:calico-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:45:42.945364    5202 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:45:42.952751    5202 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:45:42.967456    5202 start.go:159] libmachine.API.Create for "calico-783000" (driver="qemu2")
	I0729 10:45:42.967476    5202 client.go:168] LocalClient.Create starting
	I0729 10:45:42.967539    5202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:45:42.967571    5202 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:42.967583    5202 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:42.967620    5202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:45:42.967642    5202 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:42.967649    5202 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:42.968128    5202 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:45:43.120275    5202 main.go:141] libmachine: Creating SSH key...
	I0729 10:45:43.310840    5202 main.go:141] libmachine: Creating Disk image...
	I0729 10:45:43.310853    5202 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:45:43.311047    5202 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/disk.qcow2
	I0729 10:45:43.320695    5202 main.go:141] libmachine: STDOUT: 
	I0729 10:45:43.320715    5202 main.go:141] libmachine: STDERR: 
	I0729 10:45:43.320771    5202 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/disk.qcow2 +20000M
	I0729 10:45:43.328831    5202 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:45:43.328846    5202 main.go:141] libmachine: STDERR: 
	I0729 10:45:43.328860    5202 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/disk.qcow2
	I0729 10:45:43.328865    5202 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:45:43.328882    5202 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:45:43.328913    5202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:28:d0:66:80:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/disk.qcow2
	I0729 10:45:43.330520    5202 main.go:141] libmachine: STDOUT: 
	I0729 10:45:43.330535    5202 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:45:43.330555    5202 client.go:171] duration metric: took 363.085833ms to LocalClient.Create
	I0729 10:45:45.332702    5202 start.go:128] duration metric: took 2.387381875s to createHost
	I0729 10:45:45.332779    5202 start.go:83] releasing machines lock for "calico-783000", held for 2.387512917s
	W0729 10:45:45.332914    5202 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:45.340192    5202 out.go:177] * Deleting "calico-783000" in qemu2 ...
	W0729 10:45:45.373208    5202 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:45.373231    5202 start.go:729] Will try again in 5 seconds ...
	I0729 10:45:50.375207    5202 start.go:360] acquireMachinesLock for calico-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:45:50.375378    5202 start.go:364] duration metric: took 130.5µs to acquireMachinesLock for "calico-783000"
	I0729 10:45:50.375394    5202 start.go:93] Provisioning new machine with config: &{Name:calico-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:45:50.375461    5202 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:45:50.383718    5202 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:45:50.402986    5202 start.go:159] libmachine.API.Create for "calico-783000" (driver="qemu2")
	I0729 10:45:50.403016    5202 client.go:168] LocalClient.Create starting
	I0729 10:45:50.403093    5202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:45:50.403128    5202 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:50.403137    5202 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:50.403174    5202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:45:50.403199    5202 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:50.403205    5202 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:50.403655    5202 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:45:50.554528    5202 main.go:141] libmachine: Creating SSH key...
	I0729 10:45:50.663398    5202 main.go:141] libmachine: Creating Disk image...
	I0729 10:45:50.663405    5202 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:45:50.663590    5202 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/disk.qcow2
	I0729 10:45:50.672972    5202 main.go:141] libmachine: STDOUT: 
	I0729 10:45:50.672993    5202 main.go:141] libmachine: STDERR: 
	I0729 10:45:50.673042    5202 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/disk.qcow2 +20000M
	I0729 10:45:50.680933    5202 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:45:50.680948    5202 main.go:141] libmachine: STDERR: 
	I0729 10:45:50.680958    5202 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/disk.qcow2
	I0729 10:45:50.680963    5202 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:45:50.680975    5202 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:45:50.681000    5202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:92:52:eb:0f:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/calico-783000/disk.qcow2
	I0729 10:45:50.682603    5202 main.go:141] libmachine: STDOUT: 
	I0729 10:45:50.682616    5202 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:45:50.682628    5202 client.go:171] duration metric: took 279.616209ms to LocalClient.Create
	I0729 10:45:52.684784    5202 start.go:128] duration metric: took 2.309352125s to createHost
	I0729 10:45:52.684876    5202 start.go:83] releasing machines lock for "calico-783000", held for 2.309555083s
	W0729 10:45:52.685289    5202 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:52.699923    5202 out.go:177] 
	W0729 10:45:52.702980    5202 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:45:52.703001    5202 out.go:239] * 
	* 
	W0729 10:45:52.704604    5202 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:45:52.716919    5202 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.90s)
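
For reference, the disk-image preparation that succeeds on every attempt above is the standard two-step qemu-img sequence: convert the raw boot2docker seed image to qcow2, then grow its virtual size by +20000M (qcow2 allocates lazily, so the file stays small until the guest writes). A standalone sketch with hypothetical paths:

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M
	qemu-img info disk.qcow2   # confirm the new virtual size

Each run only falls over at the following step, when socket_vmnet_client wraps the qemu-system-aarch64 invocation and cannot reach the daemon socket.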

TestNetworkPlugins/group/custom-flannel/Start (9.74s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
E0729 10:46:03.512616    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.736364292s)

-- stdout --
	* [custom-flannel-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-783000" primary control-plane node in "custom-flannel-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:45:55.115106    5324 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:55.115246    5324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:55.115249    5324 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:55.115252    5324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:55.115378    5324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:45:55.116389    5324 out.go:298] Setting JSON to false
	I0729 10:45:55.132843    5324 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4519,"bootTime":1722270636,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:45:55.132912    5324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:45:55.138352    5324 out.go:177] * [custom-flannel-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:45:55.146248    5324 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:45:55.146311    5324 notify.go:220] Checking for updates...
	I0729 10:45:55.153284    5324 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:45:55.156244    5324 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:45:55.159311    5324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:45:55.162338    5324 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:45:55.165261    5324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:45:55.168650    5324 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:55.168712    5324 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:45:55.168758    5324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:45:55.173181    5324 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:45:55.180248    5324 start.go:297] selected driver: qemu2
	I0729 10:45:55.180256    5324 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:45:55.180261    5324 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:45:55.182362    5324 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:45:55.185212    5324 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:45:55.188366    5324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:45:55.188389    5324 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0729 10:45:55.188409    5324 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0729 10:45:55.188453    5324 start.go:340] cluster config:
	{Name:custom-flannel-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:45:55.191782    5324 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:45:55.197296    5324 out.go:177] * Starting "custom-flannel-783000" primary control-plane node in "custom-flannel-783000" cluster
	I0729 10:45:55.201258    5324 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:45:55.201272    5324 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:45:55.201288    5324 cache.go:56] Caching tarball of preloaded images
	I0729 10:45:55.201357    5324 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:45:55.201363    5324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:45:55.201424    5324 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/custom-flannel-783000/config.json ...
	I0729 10:45:55.201436    5324 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/custom-flannel-783000/config.json: {Name:mk63de321f86f8df57c3505e5a190c40e3352cc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:45:55.201772    5324 start.go:360] acquireMachinesLock for custom-flannel-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:45:55.201804    5324 start.go:364] duration metric: took 26µs to acquireMachinesLock for "custom-flannel-783000"
	I0729 10:45:55.201815    5324 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:45:55.201850    5324 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:45:55.209260    5324 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:45:55.224892    5324 start.go:159] libmachine.API.Create for "custom-flannel-783000" (driver="qemu2")
	I0729 10:45:55.224917    5324 client.go:168] LocalClient.Create starting
	I0729 10:45:55.224985    5324 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:45:55.225013    5324 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:55.225022    5324 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:55.225059    5324 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:45:55.225082    5324 main.go:141] libmachine: Decoding PEM data...
	I0729 10:45:55.225089    5324 main.go:141] libmachine: Parsing certificate...
	I0729 10:45:55.225533    5324 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:45:55.379832    5324 main.go:141] libmachine: Creating SSH key...
	I0729 10:45:55.434876    5324 main.go:141] libmachine: Creating Disk image...
	I0729 10:45:55.434881    5324 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:45:55.435060    5324 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/disk.qcow2
	I0729 10:45:55.444405    5324 main.go:141] libmachine: STDOUT: 
	I0729 10:45:55.444424    5324 main.go:141] libmachine: STDERR: 
	I0729 10:45:55.444465    5324 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/disk.qcow2 +20000M
	I0729 10:45:55.452376    5324 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:45:55.452395    5324 main.go:141] libmachine: STDERR: 
	I0729 10:45:55.452412    5324 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/disk.qcow2
	I0729 10:45:55.452418    5324 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:45:55.452427    5324 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:45:55.452456    5324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:05:1f:1b:70:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/disk.qcow2
	I0729 10:45:55.454042    5324 main.go:141] libmachine: STDOUT: 
	I0729 10:45:55.454053    5324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:45:55.454071    5324 client.go:171] duration metric: took 229.156208ms to LocalClient.Create
	I0729 10:45:57.456179    5324 start.go:128] duration metric: took 2.254368083s to createHost
	I0729 10:45:57.456234    5324 start.go:83] releasing machines lock for "custom-flannel-783000", held for 2.254491042s
	W0729 10:45:57.456333    5324 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:57.469194    5324 out.go:177] * Deleting "custom-flannel-783000" in qemu2 ...
	W0729 10:45:57.494557    5324 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:57.494595    5324 start.go:729] Will try again in 5 seconds ...
	I0729 10:46:02.496670    5324 start.go:360] acquireMachinesLock for custom-flannel-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:02.497253    5324 start.go:364] duration metric: took 457.375µs to acquireMachinesLock for "custom-flannel-783000"
	I0729 10:46:02.497439    5324 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:46:02.497788    5324 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:46:02.506521    5324 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:46:02.556871    5324 start.go:159] libmachine.API.Create for "custom-flannel-783000" (driver="qemu2")
	I0729 10:46:02.556933    5324 client.go:168] LocalClient.Create starting
	I0729 10:46:02.557057    5324 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:46:02.557119    5324 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:02.557136    5324 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:02.557198    5324 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:46:02.557250    5324 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:02.557260    5324 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:02.557762    5324 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:46:02.721487    5324 main.go:141] libmachine: Creating SSH key...
	I0729 10:46:02.756701    5324 main.go:141] libmachine: Creating Disk image...
	I0729 10:46:02.756706    5324 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:46:02.756896    5324 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/disk.qcow2
	I0729 10:46:02.766452    5324 main.go:141] libmachine: STDOUT: 
	I0729 10:46:02.766470    5324 main.go:141] libmachine: STDERR: 
	I0729 10:46:02.766520    5324 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/disk.qcow2 +20000M
	I0729 10:46:02.774330    5324 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:46:02.774346    5324 main.go:141] libmachine: STDERR: 
	I0729 10:46:02.774361    5324 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/disk.qcow2
	I0729 10:46:02.774364    5324 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:46:02.774375    5324 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:02.774414    5324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:e7:cb:41:14:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/custom-flannel-783000/disk.qcow2
	I0729 10:46:02.776080    5324 main.go:141] libmachine: STDOUT: 
	I0729 10:46:02.776097    5324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:02.776116    5324 client.go:171] duration metric: took 219.177625ms to LocalClient.Create
	I0729 10:46:04.778266    5324 start.go:128] duration metric: took 2.280507875s to createHost
	I0729 10:46:04.778339    5324 start.go:83] releasing machines lock for "custom-flannel-783000", held for 2.281097292s
	W0729 10:46:04.778751    5324 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:04.788292    5324 out.go:177] 
	W0729 10:46:04.792373    5324 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:46:04.792420    5324 out.go:239] * 
	W0729 10:46:04.794879    5324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:46:04.802325    5324 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.74s)
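
Every failure in this group dies at the same first step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. That points at the CI host (no socket_vmnet daemon listening) rather than at the network plugin under test. A minimal sketch, assuming shell access to the affected host, that probes the same socket directly; this is a hypothetical standalone program, not part of the test suite:

    // Probe the Unix socket that socket_vmnet_client dials, to confirm
    // whether a socket_vmnet daemon is actually listening on the host.
    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
    	if err != nil {
    		// With no daemon listening this prints:
    		// dial unix /var/run/socket_vmnet: connect: connection refused
    		fmt.Println("probe failed:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("socket_vmnet is listening")
    }

If the probe reports connection refused, restarting the socket_vmnet service on the host should let these starts proceed; retrying the tests without that would reproduce exit status 80 indefinitely.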

TestNetworkPlugins/group/false/Start (9.88s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.881711416s)

-- stdout --
	* [false-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-783000" primary control-plane node in "false-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:46:07.222844    5442 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:07.222976    5442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:07.222979    5442 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:07.222982    5442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:07.223105    5442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:46:07.224150    5442 out.go:298] Setting JSON to false
	I0729 10:46:07.240260    5442 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4531,"bootTime":1722270636,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:46:07.240332    5442 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:46:07.246928    5442 out.go:177] * [false-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:46:07.255046    5442 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:46:07.255112    5442 notify.go:220] Checking for updates...
	I0729 10:46:07.261990    5442 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:46:07.265028    5442 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:46:07.268068    5442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:46:07.270993    5442 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:46:07.274029    5442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:46:07.277266    5442 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:46:07.277336    5442 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:46:07.277384    5442 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:46:07.282018    5442 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:46:07.288871    5442 start.go:297] selected driver: qemu2
	I0729 10:46:07.288878    5442 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:46:07.288884    5442 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:46:07.291115    5442 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:46:07.293952    5442 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:46:07.297065    5442 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:46:07.297111    5442 cni.go:84] Creating CNI manager for "false"
	I0729 10:46:07.297147    5442 start.go:340] cluster config:
	{Name:false-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:46:07.300669    5442 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:07.307912    5442 out.go:177] * Starting "false-783000" primary control-plane node in "false-783000" cluster
	I0729 10:46:07.311821    5442 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:46:07.311839    5442 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:46:07.311851    5442 cache.go:56] Caching tarball of preloaded images
	I0729 10:46:07.311918    5442 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:46:07.311925    5442 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:46:07.311992    5442 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/false-783000/config.json ...
	I0729 10:46:07.312003    5442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/false-783000/config.json: {Name:mk88da73fd3436e911b408302441e3e3b21a5b0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:07.312213    5442 start.go:360] acquireMachinesLock for false-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:07.312246    5442 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "false-783000"
	I0729 10:46:07.312258    5442 start.go:93] Provisioning new machine with config: &{Name:false-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:46:07.312285    5442 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:46:07.320007    5442 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:46:07.336666    5442 start.go:159] libmachine.API.Create for "false-783000" (driver="qemu2")
	I0729 10:46:07.336692    5442 client.go:168] LocalClient.Create starting
	I0729 10:46:07.336769    5442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:46:07.336799    5442 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:07.336808    5442 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:07.336847    5442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:46:07.336870    5442 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:07.336878    5442 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:07.337248    5442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:46:07.489615    5442 main.go:141] libmachine: Creating SSH key...
	I0729 10:46:07.565051    5442 main.go:141] libmachine: Creating Disk image...
	I0729 10:46:07.565071    5442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:46:07.565271    5442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/disk.qcow2
	I0729 10:46:07.575312    5442 main.go:141] libmachine: STDOUT: 
	I0729 10:46:07.575337    5442 main.go:141] libmachine: STDERR: 
	I0729 10:46:07.575398    5442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/disk.qcow2 +20000M
	I0729 10:46:07.583864    5442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:46:07.583891    5442 main.go:141] libmachine: STDERR: 
	I0729 10:46:07.583907    5442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/disk.qcow2
	I0729 10:46:07.583914    5442 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:46:07.583925    5442 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:07.583952    5442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:69:3b:f3:5e:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/disk.qcow2
	I0729 10:46:07.585661    5442 main.go:141] libmachine: STDOUT: 
	I0729 10:46:07.585676    5442 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:07.585694    5442 client.go:171] duration metric: took 249.005083ms to LocalClient.Create
	I0729 10:46:09.587800    5442 start.go:128] duration metric: took 2.275564209s to createHost
	I0729 10:46:09.587874    5442 start.go:83] releasing machines lock for "false-783000", held for 2.275687541s
	W0729 10:46:09.587914    5442 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:09.597660    5442 out.go:177] * Deleting "false-783000" in qemu2 ...
	W0729 10:46:09.618184    5442 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:09.618201    5442 start.go:729] Will try again in 5 seconds ...
	I0729 10:46:14.620306    5442 start.go:360] acquireMachinesLock for false-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:14.620939    5442 start.go:364] duration metric: took 453.875µs to acquireMachinesLock for "false-783000"
	I0729 10:46:14.621023    5442 start.go:93] Provisioning new machine with config: &{Name:false-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:46:14.621341    5442 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:46:14.634030    5442 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:46:14.683670    5442 start.go:159] libmachine.API.Create for "false-783000" (driver="qemu2")
	I0729 10:46:14.683726    5442 client.go:168] LocalClient.Create starting
	I0729 10:46:14.683851    5442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:46:14.683919    5442 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:14.683940    5442 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:14.683998    5442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:46:14.684049    5442 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:14.684064    5442 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:14.684921    5442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:46:14.847919    5442 main.go:141] libmachine: Creating SSH key...
	I0729 10:46:15.014061    5442 main.go:141] libmachine: Creating Disk image...
	I0729 10:46:15.014070    5442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:46:15.014259    5442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/disk.qcow2
	I0729 10:46:15.023467    5442 main.go:141] libmachine: STDOUT: 
	I0729 10:46:15.023489    5442 main.go:141] libmachine: STDERR: 
	I0729 10:46:15.023537    5442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/disk.qcow2 +20000M
	I0729 10:46:15.031364    5442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:46:15.031380    5442 main.go:141] libmachine: STDERR: 
	I0729 10:46:15.031391    5442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/disk.qcow2
	I0729 10:46:15.031404    5442 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:46:15.031413    5442 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:15.031443    5442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:78:e4:cc:8f:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/false-783000/disk.qcow2
	I0729 10:46:15.033099    5442 main.go:141] libmachine: STDOUT: 
	I0729 10:46:15.033117    5442 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:15.033131    5442 client.go:171] duration metric: took 349.411125ms to LocalClient.Create
	I0729 10:46:17.034815    5442 start.go:128] duration metric: took 2.413486583s to createHost
	I0729 10:46:17.034857    5442 start.go:83] releasing machines lock for "false-783000", held for 2.413961583s
	W0729 10:46:17.034947    5442 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:17.051715    5442 out.go:177] 
	W0729 10:46:17.056707    5442 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:46:17.056714    5442 out.go:239] * 
	W0729 10:46:17.057427    5442 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:46:17.069590    5442 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.88s)
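
Each of these logs shows the same control flow: one failed createHost, deletion of the partially created profile, a fixed 5-second wait (start.go:729, "Will try again in 5 seconds"), then exactly one more attempt before exiting with GUEST_PROVISION. A schematic sketch of that shape follows; createHost and deleteHost here are hypothetical stand-ins, not minikube's actual libmachine API:

    // One failed create, cleanup, a 5 s pause, one retry, then give up.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func createHost(name string) error {
    	// Stands in for the qemu2 driver's create path, which in the runs
    	// above fails while connecting to /var/run/socket_vmnet.
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func deleteHost(name string) {
    	fmt.Printf("* Deleting %q ...\n", name)
    }

    func main() {
    	name := "false-783000"
    	if err := createHost(name); err != nil {
    		deleteHost(name)
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second)
    		if err := createHost(name); err != nil {
    			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    		}
    	}
    }

The single retry explains why each failed start takes roughly 10 seconds (two ~2.3 s create attempts plus the 5 s back-off) even though the underlying error is immediate.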

TestNetworkPlugins/group/enable-default-cni/Start (9.99s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.984503s)

-- stdout --
	* [enable-default-cni-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-783000" primary control-plane node in "enable-default-cni-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:46:19.227755    5554 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:19.227883    5554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:19.227886    5554 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:19.227889    5554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:19.228021    5554 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:46:19.229035    5554 out.go:298] Setting JSON to false
	I0729 10:46:19.245350    5554 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4543,"bootTime":1722270636,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:46:19.245420    5554 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:46:19.251295    5554 out.go:177] * [enable-default-cni-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:46:19.259147    5554 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:46:19.259283    5554 notify.go:220] Checking for updates...
	I0729 10:46:19.266342    5554 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:46:19.267565    5554 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:46:19.270356    5554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:46:19.273312    5554 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:46:19.276333    5554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:46:19.279724    5554 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:46:19.279788    5554 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:46:19.279838    5554 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:46:19.284304    5554 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:46:19.291335    5554 start.go:297] selected driver: qemu2
	I0729 10:46:19.291345    5554 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:46:19.291359    5554 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:46:19.293446    5554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:46:19.296369    5554 out.go:177] * Automatically selected the socket_vmnet network
	E0729 10:46:19.299383    5554 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0729 10:46:19.299394    5554 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:46:19.299419    5554 cni.go:84] Creating CNI manager for "bridge"
	I0729 10:46:19.299425    5554 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:46:19.299450    5554 start.go:340] cluster config:
	{Name:enable-default-cni-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:46:19.302682    5554 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:19.310324    5554 out.go:177] * Starting "enable-default-cni-783000" primary control-plane node in "enable-default-cni-783000" cluster
	I0729 10:46:19.313326    5554 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:46:19.313340    5554 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:46:19.313350    5554 cache.go:56] Caching tarball of preloaded images
	I0729 10:46:19.313400    5554 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:46:19.313406    5554 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:46:19.313484    5554 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/enable-default-cni-783000/config.json ...
	I0729 10:46:19.313495    5554 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/enable-default-cni-783000/config.json: {Name:mkcea30a1866ad2cca92b104c60cfad31f5166fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:19.313700    5554 start.go:360] acquireMachinesLock for enable-default-cni-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:19.313731    5554 start.go:364] duration metric: took 24.042µs to acquireMachinesLock for "enable-default-cni-783000"
	I0729 10:46:19.313741    5554 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:46:19.313767    5554 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:46:19.320217    5554 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:46:19.335241    5554 start.go:159] libmachine.API.Create for "enable-default-cni-783000" (driver="qemu2")
	I0729 10:46:19.335264    5554 client.go:168] LocalClient.Create starting
	I0729 10:46:19.335329    5554 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:46:19.335359    5554 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:19.335367    5554 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:19.335403    5554 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:46:19.335425    5554 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:19.335431    5554 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:19.335764    5554 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:46:19.486884    5554 main.go:141] libmachine: Creating SSH key...
	I0729 10:46:19.582870    5554 main.go:141] libmachine: Creating Disk image...
	I0729 10:46:19.582876    5554 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:46:19.583066    5554 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/disk.qcow2
	I0729 10:46:19.592502    5554 main.go:141] libmachine: STDOUT: 
	I0729 10:46:19.592515    5554 main.go:141] libmachine: STDERR: 
	I0729 10:46:19.592565    5554 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/disk.qcow2 +20000M
	I0729 10:46:19.600431    5554 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:46:19.600512    5554 main.go:141] libmachine: STDERR: 
	I0729 10:46:19.600531    5554 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/disk.qcow2
	I0729 10:46:19.600540    5554 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:46:19.600552    5554 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:19.600576    5554 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:d1:46:11:30:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/disk.qcow2
	I0729 10:46:19.602201    5554 main.go:141] libmachine: STDOUT: 
	I0729 10:46:19.602215    5554 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:19.602235    5554 client.go:171] duration metric: took 266.974375ms to LocalClient.Create
	I0729 10:46:21.602854    5554 start.go:128] duration metric: took 2.289143959s to createHost
	I0729 10:46:21.602869    5554 start.go:83] releasing machines lock for "enable-default-cni-783000", held for 2.289203875s
	W0729 10:46:21.602883    5554 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:21.616444    5554 out.go:177] * Deleting "enable-default-cni-783000" in qemu2 ...
	W0729 10:46:21.626716    5554 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:21.626727    5554 start.go:729] Will try again in 5 seconds ...
	I0729 10:46:26.628739    5554 start.go:360] acquireMachinesLock for enable-default-cni-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:26.629223    5554 start.go:364] duration metric: took 397.417µs to acquireMachinesLock for "enable-default-cni-783000"
	I0729 10:46:26.629359    5554 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:46:26.629620    5554 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:46:26.637845    5554 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:46:26.677639    5554 start.go:159] libmachine.API.Create for "enable-default-cni-783000" (driver="qemu2")
	I0729 10:46:26.677718    5554 client.go:168] LocalClient.Create starting
	I0729 10:46:26.677889    5554 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:46:26.677963    5554 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:26.677979    5554 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:26.678039    5554 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:46:26.678080    5554 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:26.678090    5554 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:26.678682    5554 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:46:26.837116    5554 main.go:141] libmachine: Creating SSH key...
	I0729 10:46:27.125131    5554 main.go:141] libmachine: Creating Disk image...
	I0729 10:46:27.125142    5554 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:46:27.125324    5554 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/disk.qcow2
	I0729 10:46:27.135185    5554 main.go:141] libmachine: STDOUT: 
	I0729 10:46:27.135214    5554 main.go:141] libmachine: STDERR: 
	I0729 10:46:27.135290    5554 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/disk.qcow2 +20000M
	I0729 10:46:27.143483    5554 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:46:27.143504    5554 main.go:141] libmachine: STDERR: 
	I0729 10:46:27.143518    5554 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/disk.qcow2
	I0729 10:46:27.143523    5554 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:46:27.143529    5554 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:27.143564    5554 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:9e:ef:ca:d1:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/enable-default-cni-783000/disk.qcow2
	I0729 10:46:27.145296    5554 main.go:141] libmachine: STDOUT: 
	I0729 10:46:27.145310    5554 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:27.145323    5554 client.go:171] duration metric: took 467.600666ms to LocalClient.Create
	I0729 10:46:29.147386    5554 start.go:128] duration metric: took 2.5178215s to createHost
	I0729 10:46:29.147447    5554 start.go:83] releasing machines lock for "enable-default-cni-783000", held for 2.51827875s
	W0729 10:46:29.147730    5554 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:29.156626    5554 out.go:177] 
	W0729 10:46:29.160717    5554 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:46:29.160732    5554 out.go:239] * 
	* 
	W0729 10:46:29.161960    5554 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:46:29.171604    5554 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.99s)
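
Editor's note: every failure in this group of tests is the same: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so no qemu2 VM is ever created and minikube exits with status 80 before Kubernetes is even attempted. A minimal probe of that precondition (a sketch in Go; it is not part of the test suite) looks like this:

    // probe_socket_vmnet.go -- editor's sketch, not minikube code.
    // Dials the unix socket that socket_vmnet_client needs; on this host
    // the dial would fail with "connection refused", matching the log above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the daemon is simply not running on the build agent, restarting it (for a Homebrew install of socket_vmnet, typically `sudo brew services start socket_vmnet`) is the obvious first step; this is an assumption about the host's setup, not something the log itself confirms.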

TestNetworkPlugins/group/flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.857178875s)

-- stdout --
	* [flannel-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-783000" primary control-plane node in "flannel-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:46:31.519291    5667 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:31.519419    5667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:31.519422    5667 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:31.519424    5667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:31.519582    5667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:46:31.520621    5667 out.go:298] Setting JSON to false
	I0729 10:46:31.536752    5667 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4555,"bootTime":1722270636,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:46:31.536817    5667 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:46:31.543406    5667 out.go:177] * [flannel-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:46:31.551331    5667 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:46:31.551383    5667 notify.go:220] Checking for updates...
	I0729 10:46:31.558238    5667 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:46:31.561328    5667 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:46:31.564325    5667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:46:31.567276    5667 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:46:31.570337    5667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:46:31.573628    5667 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:46:31.573692    5667 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:46:31.573747    5667 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:46:31.578271    5667 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:46:31.585361    5667 start.go:297] selected driver: qemu2
	I0729 10:46:31.585371    5667 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:46:31.585379    5667 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:46:31.587450    5667 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:46:31.590283    5667 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:46:31.593386    5667 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:46:31.593434    5667 cni.go:84] Creating CNI manager for "flannel"
	I0729 10:46:31.593439    5667 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0729 10:46:31.593475    5667 start.go:340] cluster config:
	{Name:flannel-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:46:31.596679    5667 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:31.602304    5667 out.go:177] * Starting "flannel-783000" primary control-plane node in "flannel-783000" cluster
	I0729 10:46:31.606269    5667 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:46:31.606282    5667 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:46:31.606292    5667 cache.go:56] Caching tarball of preloaded images
	I0729 10:46:31.606360    5667 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:46:31.606366    5667 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:46:31.606433    5667 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/flannel-783000/config.json ...
	I0729 10:46:31.606447    5667 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/flannel-783000/config.json: {Name:mkd944378fa59911759044314540e72f7bdc9627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:31.606668    5667 start.go:360] acquireMachinesLock for flannel-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:31.606699    5667 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "flannel-783000"
	I0729 10:46:31.606714    5667 start.go:93] Provisioning new machine with config: &{Name:flannel-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:46:31.606739    5667 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:46:31.615314    5667 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:46:31.631069    5667 start.go:159] libmachine.API.Create for "flannel-783000" (driver="qemu2")
	I0729 10:46:31.631100    5667 client.go:168] LocalClient.Create starting
	I0729 10:46:31.631179    5667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:46:31.631208    5667 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:31.631218    5667 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:31.631259    5667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:46:31.631281    5667 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:31.631289    5667 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:31.631617    5667 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:46:31.783459    5667 main.go:141] libmachine: Creating SSH key...
	I0729 10:46:31.882543    5667 main.go:141] libmachine: Creating Disk image...
	I0729 10:46:31.882549    5667 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:46:31.882728    5667 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/disk.qcow2
	I0729 10:46:31.891766    5667 main.go:141] libmachine: STDOUT: 
	I0729 10:46:31.891781    5667 main.go:141] libmachine: STDERR: 
	I0729 10:46:31.891830    5667 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/disk.qcow2 +20000M
	I0729 10:46:31.899990    5667 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:46:31.900002    5667 main.go:141] libmachine: STDERR: 
	I0729 10:46:31.900017    5667 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/disk.qcow2
	I0729 10:46:31.900027    5667 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:46:31.900041    5667 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:31.900067    5667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:30:08:b0:58:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/disk.qcow2
	I0729 10:46:31.901737    5667 main.go:141] libmachine: STDOUT: 
	I0729 10:46:31.901749    5667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:31.901765    5667 client.go:171] duration metric: took 270.669042ms to LocalClient.Create
	I0729 10:46:33.903917    5667 start.go:128] duration metric: took 2.297216s to createHost
	I0729 10:46:33.904010    5667 start.go:83] releasing machines lock for "flannel-783000", held for 2.297370417s
	W0729 10:46:33.904067    5667 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:33.915387    5667 out.go:177] * Deleting "flannel-783000" in qemu2 ...
	W0729 10:46:33.940839    5667 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:33.940876    5667 start.go:729] Will try again in 5 seconds ...
	I0729 10:46:38.942073    5667 start.go:360] acquireMachinesLock for flannel-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:38.942639    5667 start.go:364] duration metric: took 441.417µs to acquireMachinesLock for "flannel-783000"
	I0729 10:46:38.942801    5667 start.go:93] Provisioning new machine with config: &{Name:flannel-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:46:38.943005    5667 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:46:38.953718    5667 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:46:38.999729    5667 start.go:159] libmachine.API.Create for "flannel-783000" (driver="qemu2")
	I0729 10:46:38.999789    5667 client.go:168] LocalClient.Create starting
	I0729 10:46:38.999971    5667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:46:39.000040    5667 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:39.000059    5667 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:39.000132    5667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:46:39.000177    5667 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:39.000188    5667 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:39.000804    5667 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:46:39.188261    5667 main.go:141] libmachine: Creating SSH key...
	I0729 10:46:39.289240    5667 main.go:141] libmachine: Creating Disk image...
	I0729 10:46:39.289248    5667 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:46:39.289432    5667 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/disk.qcow2
	I0729 10:46:39.298595    5667 main.go:141] libmachine: STDOUT: 
	I0729 10:46:39.298613    5667 main.go:141] libmachine: STDERR: 
	I0729 10:46:39.298667    5667 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/disk.qcow2 +20000M
	I0729 10:46:39.306690    5667 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:46:39.306707    5667 main.go:141] libmachine: STDERR: 
	I0729 10:46:39.306726    5667 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/disk.qcow2
	I0729 10:46:39.306731    5667 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:46:39.306743    5667 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:39.306770    5667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:c9:e1:fc:9c:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/flannel-783000/disk.qcow2
	I0729 10:46:39.308540    5667 main.go:141] libmachine: STDOUT: 
	I0729 10:46:39.308558    5667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:39.308570    5667 client.go:171] duration metric: took 308.786083ms to LocalClient.Create
	I0729 10:46:41.310692    5667 start.go:128] duration metric: took 2.367708125s to createHost
	I0729 10:46:41.310766    5667 start.go:83] releasing machines lock for "flannel-783000", held for 2.368172708s
	W0729 10:46:41.311021    5667 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:41.319435    5667 out.go:177] 
	W0729 10:46:41.323429    5667 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:46:41.323458    5667 out.go:239] * 
	* 
	W0729 10:46:41.325285    5667 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:46:41.335496    5667 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.86s)
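
Editor's note: the stderr above shows the driver's full recovery path: the first create fails, the half-created "flannel-783000" profile is deleted, the create is retried once after 5 seconds, and only the second failure is fatal (GUEST_PROVISION). Condensed into Go, with createHost and deleteHost as hypothetical stand-ins rather than real minikube functions, the control flow is roughly:

    // retry_sketch.go -- editor's condensed reading of the log above;
    // createHost/deleteHost are hypothetical stand-ins, not minikube APIs.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func createHost() error {
        // On this host every attempt fails the same way.
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func deleteHost() { fmt.Println(`* Deleting "flannel-783000" in qemu2 ...`) }

    func main() {
        if err := createHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            deleteHost()
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err := createHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err) // second failure is fatal
            }
        }
    }

Because the retry hits the identical "Connection refused", the 5-second back-off cannot help here: the daemon is absent, not briefly busy.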

TestNetworkPlugins/group/bridge/Start (9.79s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.7857535s)

-- stdout --
	* [bridge-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-783000" primary control-plane node in "bridge-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:46:43.714791    5788 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:43.714925    5788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:43.714931    5788 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:43.714933    5788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:43.715062    5788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:46:43.716148    5788 out.go:298] Setting JSON to false
	I0729 10:46:43.732176    5788 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4567,"bootTime":1722270636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:46:43.732246    5788 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:46:43.738194    5788 out.go:177] * [bridge-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:46:43.746018    5788 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:46:43.746060    5788 notify.go:220] Checking for updates...
	I0729 10:46:43.753024    5788 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:46:43.756106    5788 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:46:43.759115    5788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:46:43.762074    5788 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:46:43.765056    5788 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:46:43.768452    5788 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:46:43.768517    5788 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:46:43.768560    5788 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:46:43.772948    5788 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:46:43.780063    5788 start.go:297] selected driver: qemu2
	I0729 10:46:43.780074    5788 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:46:43.780081    5788 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:46:43.782391    5788 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:46:43.785047    5788 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:46:43.788111    5788 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:46:43.788149    5788 cni.go:84] Creating CNI manager for "bridge"
	I0729 10:46:43.788153    5788 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:46:43.788190    5788 start.go:340] cluster config:
	{Name:bridge-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:46:43.791838    5788 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:43.799008    5788 out.go:177] * Starting "bridge-783000" primary control-plane node in "bridge-783000" cluster
	I0729 10:46:43.802995    5788 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:46:43.803010    5788 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:46:43.803016    5788 cache.go:56] Caching tarball of preloaded images
	I0729 10:46:43.803067    5788 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:46:43.803073    5788 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:46:43.803124    5788 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/bridge-783000/config.json ...
	I0729 10:46:43.803134    5788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/bridge-783000/config.json: {Name:mka099082790bb5f9d953c90b6ae809218871959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:43.803453    5788 start.go:360] acquireMachinesLock for bridge-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:43.803482    5788 start.go:364] duration metric: took 24.083µs to acquireMachinesLock for "bridge-783000"
	I0729 10:46:43.803492    5788 start.go:93] Provisioning new machine with config: &{Name:bridge-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:46:43.803526    5788 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:46:43.812098    5788 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:46:43.827002    5788 start.go:159] libmachine.API.Create for "bridge-783000" (driver="qemu2")
	I0729 10:46:43.827029    5788 client.go:168] LocalClient.Create starting
	I0729 10:46:43.827100    5788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:46:43.827132    5788 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:43.827140    5788 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:43.827180    5788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:46:43.827203    5788 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:43.827213    5788 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:43.827717    5788 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:46:43.980350    5788 main.go:141] libmachine: Creating SSH key...
	I0729 10:46:44.073216    5788 main.go:141] libmachine: Creating Disk image...
	I0729 10:46:44.073224    5788 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:46:44.073423    5788 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/disk.qcow2
	I0729 10:46:44.082855    5788 main.go:141] libmachine: STDOUT: 
	I0729 10:46:44.082872    5788 main.go:141] libmachine: STDERR: 
	I0729 10:46:44.082926    5788 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/disk.qcow2 +20000M
	I0729 10:46:44.090771    5788 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:46:44.090800    5788 main.go:141] libmachine: STDERR: 
	I0729 10:46:44.090817    5788 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/disk.qcow2
	I0729 10:46:44.090823    5788 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:46:44.090840    5788 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:44.090865    5788 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:b4:4c:da:a0:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/disk.qcow2
	I0729 10:46:44.092444    5788 main.go:141] libmachine: STDOUT: 
	I0729 10:46:44.092464    5788 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:44.092484    5788 client.go:171] duration metric: took 265.458792ms to LocalClient.Create
	I0729 10:46:46.094524    5788 start.go:128] duration metric: took 2.291052625s to createHost
	I0729 10:46:46.094568    5788 start.go:83] releasing machines lock for "bridge-783000", held for 2.291151s
	W0729 10:46:46.094586    5788 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:46.107112    5788 out.go:177] * Deleting "bridge-783000" in qemu2 ...
	W0729 10:46:46.123036    5788 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:46.123049    5788 start.go:729] Will try again in 5 seconds ...
	I0729 10:46:51.125096    5788 start.go:360] acquireMachinesLock for bridge-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:51.125510    5788 start.go:364] duration metric: took 337.25µs to acquireMachinesLock for "bridge-783000"
	I0729 10:46:51.125575    5788 start.go:93] Provisioning new machine with config: &{Name:bridge-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:46:51.125741    5788 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:46:51.134282    5788 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:46:51.171434    5788 start.go:159] libmachine.API.Create for "bridge-783000" (driver="qemu2")
	I0729 10:46:51.171481    5788 client.go:168] LocalClient.Create starting
	I0729 10:46:51.171595    5788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:46:51.171658    5788 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:51.171670    5788 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:51.171733    5788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:46:51.171775    5788 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:51.171791    5788 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:51.172235    5788 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:46:51.327985    5788 main.go:141] libmachine: Creating SSH key...
	I0729 10:46:51.413595    5788 main.go:141] libmachine: Creating Disk image...
	I0729 10:46:51.413602    5788 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:46:51.413789    5788 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/disk.qcow2
	I0729 10:46:51.423649    5788 main.go:141] libmachine: STDOUT: 
	I0729 10:46:51.423668    5788 main.go:141] libmachine: STDERR: 
	I0729 10:46:51.423724    5788 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/disk.qcow2 +20000M
	I0729 10:46:51.431846    5788 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:46:51.431876    5788 main.go:141] libmachine: STDERR: 
	I0729 10:46:51.431888    5788 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/disk.qcow2
	I0729 10:46:51.431892    5788 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:46:51.431897    5788 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:51.431927    5788 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:9e:41:05:75:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/bridge-783000/disk.qcow2
	I0729 10:46:51.433584    5788 main.go:141] libmachine: STDOUT: 
	I0729 10:46:51.433599    5788 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:51.433619    5788 client.go:171] duration metric: took 262.134083ms to LocalClient.Create
	I0729 10:46:53.435736    5788 start.go:128] duration metric: took 2.310013958s to createHost
	I0729 10:46:53.435860    5788 start.go:83] releasing machines lock for "bridge-783000", held for 2.3103805s
	W0729 10:46:53.436156    5788 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:53.445712    5788 out.go:177] 
	W0729 10:46:53.450751    5788 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:46:53.450796    5788 out.go:239] * 
	* 
	W0729 10:46:53.452591    5788 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:46:53.463657    5788 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.79s)
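
Editor's note: disk preparation succeeds on every attempt in these logs: the driver converts the raw boot image to qcow2 and grows it by 20000M ("Image resized.") before the VM is started, so the failure is isolated to the socket_vmnet hand-off, not storage. A sketch of those two qemu-img invocations (Go via os/exec; the file names here are abbreviated stand-ins for the full machine-profile paths in the log):

    // disk_sketch.go -- editor's sketch of the two qemu-img steps logged above.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        raw, qcow2 := "disk.qcow2.raw", "disk.qcow2" // abbreviated paths

        // qemu-img convert -f raw -O qcow2 <raw> <qcow2>
        if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
            log.Fatalf("convert failed: %v\n%s", err, out)
        }
        // qemu-img resize <qcow2> +20000M  -> the "Image resized." lines above
        if out, err := exec.Command("qemu-img", "resize", qcow2, "+20000M").CombinedOutput(); err != nil {
            log.Fatalf("resize failed: %v\n%s", err, out)
        }
        log.Println("disk image ready for the qemu-system-aarch64 start")
    }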

TestNetworkPlugins/group/kubenet/Start (11.8s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (11.796135625s)

-- stdout --
	* [kubenet-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-783000" primary control-plane node in "kubenet-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:46:55.633648    5906 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:55.633779    5906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:55.633783    5906 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:55.633786    5906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:55.633920    5906 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:46:55.634991    5906 out.go:298] Setting JSON to false
	I0729 10:46:55.650954    5906 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4579,"bootTime":1722270636,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:46:55.651040    5906 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:46:55.656581    5906 out.go:177] * [kubenet-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:46:55.664502    5906 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:46:55.664611    5906 notify.go:220] Checking for updates...
	I0729 10:46:55.671552    5906 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:46:55.674622    5906 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:46:55.677563    5906 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:46:55.680580    5906 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:46:55.683557    5906 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:46:55.686958    5906 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:46:55.687021    5906 config.go:182] Loaded profile config "stopped-upgrade-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:46:55.687072    5906 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:46:55.691564    5906 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:46:55.698490    5906 start.go:297] selected driver: qemu2
	I0729 10:46:55.698497    5906 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:46:55.698503    5906 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:46:55.700543    5906 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:46:55.703529    5906 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:46:55.706561    5906 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:46:55.706605    5906 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0729 10:46:55.706640    5906 start.go:340] cluster config:
	{Name:kubenet-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:46:55.710018    5906 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:55.718425    5906 out.go:177] * Starting "kubenet-783000" primary control-plane node in "kubenet-783000" cluster
	I0729 10:46:55.722518    5906 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:46:55.722533    5906 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:46:55.722544    5906 cache.go:56] Caching tarball of preloaded images
	I0729 10:46:55.722601    5906 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:46:55.722607    5906 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:46:55.722673    5906 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/kubenet-783000/config.json ...
	I0729 10:46:55.722684    5906 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/kubenet-783000/config.json: {Name:mkb8325c6ee376115d4f8d1596ef4e5cd0715a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:55.722893    5906 start.go:360] acquireMachinesLock for kubenet-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:55.722923    5906 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "kubenet-783000"
	I0729 10:46:55.722935    5906 start.go:93] Provisioning new machine with config: &{Name:kubenet-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:46:55.722959    5906 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:46:55.731548    5906 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:46:55.747008    5906 start.go:159] libmachine.API.Create for "kubenet-783000" (driver="qemu2")
	I0729 10:46:55.747040    5906 client.go:168] LocalClient.Create starting
	I0729 10:46:55.747111    5906 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:46:55.747141    5906 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:55.747150    5906 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:55.747193    5906 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:46:55.747216    5906 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:55.747226    5906 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:55.747606    5906 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:46:55.902366    5906 main.go:141] libmachine: Creating SSH key...
	I0729 10:46:55.962978    5906 main.go:141] libmachine: Creating Disk image...
	I0729 10:46:55.962985    5906 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:46:55.963169    5906 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/disk.qcow2
	I0729 10:46:55.972505    5906 main.go:141] libmachine: STDOUT: 
	I0729 10:46:55.972526    5906 main.go:141] libmachine: STDERR: 
	I0729 10:46:55.972577    5906 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/disk.qcow2 +20000M
	I0729 10:46:55.980481    5906 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:46:55.980496    5906 main.go:141] libmachine: STDERR: 
	I0729 10:46:55.980514    5906 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/disk.qcow2
	I0729 10:46:55.980519    5906 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:46:55.980534    5906 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:55.980558    5906 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:18:aa:d1:1c:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/disk.qcow2
	I0729 10:46:55.982227    5906 main.go:141] libmachine: STDOUT: 
	I0729 10:46:55.982247    5906 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:55.982266    5906 client.go:171] duration metric: took 235.227917ms to LocalClient.Create
	I0729 10:46:57.984318    5906 start.go:128] duration metric: took 2.261414708s to createHost
	I0729 10:46:57.984362    5906 start.go:83] releasing machines lock for "kubenet-783000", held for 2.26149375s
	W0729 10:46:57.984418    5906 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:57.994372    5906 out.go:177] * Deleting "kubenet-783000" in qemu2 ...
	W0729 10:46:58.016506    5906 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:58.016524    5906 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:03.018502    5906 start.go:360] acquireMachinesLock for kubenet-783000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:04.944220    5906 start.go:364] duration metric: took 1.925717959s to acquireMachinesLock for "kubenet-783000"
	I0729 10:47:04.944380    5906 start.go:93] Provisioning new machine with config: &{Name:kubenet-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:04.944671    5906 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:04.950985    5906 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:47:05.000200    5906 start.go:159] libmachine.API.Create for "kubenet-783000" (driver="qemu2")
	I0729 10:47:05.000250    5906 client.go:168] LocalClient.Create starting
	I0729 10:47:05.000361    5906 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:47:05.000425    5906 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:05.000440    5906 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:05.000504    5906 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:47:05.000548    5906 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:05.000571    5906 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:05.001411    5906 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:47:05.167118    5906 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:05.331691    5906 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:05.331700    5906 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:05.331915    5906 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/disk.qcow2
	I0729 10:47:05.341852    5906 main.go:141] libmachine: STDOUT: 
	I0729 10:47:05.341871    5906 main.go:141] libmachine: STDERR: 
	I0729 10:47:05.341919    5906 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/disk.qcow2 +20000M
	I0729 10:47:05.349836    5906 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:05.349853    5906 main.go:141] libmachine: STDERR: 
	I0729 10:47:05.349872    5906 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/disk.qcow2
	I0729 10:47:05.349879    5906 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:05.349885    5906 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:05.349909    5906 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:87:3f:73:fa:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/kubenet-783000/disk.qcow2
	I0729 10:47:05.351487    5906 main.go:141] libmachine: STDOUT: 
	I0729 10:47:05.351503    5906 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:05.351515    5906 client.go:171] duration metric: took 351.269834ms to LocalClient.Create
	I0729 10:47:07.351915    5906 start.go:128] duration metric: took 2.407271s to createHost
	I0729 10:47:07.351972    5906 start.go:83] releasing machines lock for "kubenet-783000", held for 2.407761084s
	W0729 10:47:07.352316    5906 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:07.369088    5906 out.go:177] 
	W0729 10:47:07.375013    5906 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:07.375040    5906 out.go:239] * 
	* 
	W0729 10:47:07.377763    5906 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:07.386741    5906 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (11.80s)
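Every start failure in this group reduces to the same root cause visible above: nothing was listening on /var/run/socket_vmnet, so socket_vmnet_client exited with "Connection refused" before QEMU was ever launched (both create attempts fail within milliseconds and QEMU's STDOUT is empty). A minimal diagnostic sketch for the agent, assuming a from-source socket_vmnet install under /opt/socket_vmnet as the client path in the log suggests; these commands are illustrative and were not part of the test run:

	# check whether the daemon process is running at all
	pgrep -fl socket_vmnet
	# check that the unix socket exists and accepts connections
	# (nc -U is BSD netcat's unix-domain-socket mode, available on macOS)
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null && echo listening || echo refused
	# if both checks fail, the daemon can be run in the foreground to watch for errors
	# (invocation per the socket_vmnet README; the gateway address is the project
	# default and may differ on this agent)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet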

TestStartStop/group/old-k8s-version/serial/FirstStart (11.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-670000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-670000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (11.836821917s)

-- stdout --
	* [old-k8s-version-670000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-670000" primary control-plane node in "old-k8s-version-670000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-670000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:47:02.452599    5921 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:02.452743    5921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:02.452747    5921 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:02.452750    5921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:02.452880    5921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:02.453866    5921 out.go:298] Setting JSON to false
	I0729 10:47:02.470134    5921 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4586,"bootTime":1722270636,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:47:02.470228    5921 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:47:02.474574    5921 out.go:177] * [old-k8s-version-670000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:47:02.481697    5921 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:47:02.481742    5921 notify.go:220] Checking for updates...
	I0729 10:47:02.488617    5921 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:47:02.491616    5921 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:47:02.494522    5921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:47:02.497580    5921 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:47:02.500607    5921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:47:02.503896    5921 config.go:182] Loaded profile config "kubenet-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:02.503966    5921 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:02.504017    5921 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:47:02.508527    5921 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:47:02.515591    5921 start.go:297] selected driver: qemu2
	I0729 10:47:02.515601    5921 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:47:02.515614    5921 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:47:02.518231    5921 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:47:02.521638    5921 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:47:02.524631    5921 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:47:02.524668    5921 cni.go:84] Creating CNI manager for ""
	I0729 10:47:02.524675    5921 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 10:47:02.524711    5921 start.go:340] cluster config:
	{Name:old-k8s-version-670000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:02.528727    5921 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:02.536602    5921 out.go:177] * Starting "old-k8s-version-670000" primary control-plane node in "old-k8s-version-670000" cluster
	I0729 10:47:02.540361    5921 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 10:47:02.540374    5921 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 10:47:02.540383    5921 cache.go:56] Caching tarball of preloaded images
	I0729 10:47:02.540445    5921 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:47:02.540450    5921 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 10:47:02.540501    5921 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/old-k8s-version-670000/config.json ...
	I0729 10:47:02.540511    5921 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/old-k8s-version-670000/config.json: {Name:mk032cd9267f6fbb80207d0898ad752b33623f42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:47:02.540720    5921 start.go:360] acquireMachinesLock for old-k8s-version-670000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:02.540755    5921 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "old-k8s-version-670000"
	I0729 10:47:02.540767    5921 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:02.540804    5921 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:02.548577    5921 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:47:02.565841    5921 start.go:159] libmachine.API.Create for "old-k8s-version-670000" (driver="qemu2")
	I0729 10:47:02.565879    5921 client.go:168] LocalClient.Create starting
	I0729 10:47:02.565936    5921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:47:02.565963    5921 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:02.565972    5921 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:02.566012    5921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:47:02.566034    5921 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:02.566040    5921 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:02.566457    5921 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:47:02.726378    5921 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:02.922007    5921 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:02.922013    5921 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:02.922224    5921 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2
	I0729 10:47:02.932098    5921 main.go:141] libmachine: STDOUT: 
	I0729 10:47:02.932113    5921 main.go:141] libmachine: STDERR: 
	I0729 10:47:02.932162    5921 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2 +20000M
	I0729 10:47:02.940150    5921 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:02.940161    5921 main.go:141] libmachine: STDERR: 
	I0729 10:47:02.940179    5921 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2
	I0729 10:47:02.940183    5921 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:02.940195    5921 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:02.940233    5921 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:75:d4:d8:6c:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2
	I0729 10:47:02.941859    5921 main.go:141] libmachine: STDOUT: 
	I0729 10:47:02.941871    5921 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:02.941889    5921 client.go:171] duration metric: took 376.0175ms to LocalClient.Create
	I0729 10:47:04.944014    5921 start.go:128] duration metric: took 2.403260875s to createHost
	I0729 10:47:04.944073    5921 start.go:83] releasing machines lock for "old-k8s-version-670000", held for 2.403380167s
	W0729 10:47:04.944175    5921 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:04.962387    5921 out.go:177] * Deleting "old-k8s-version-670000" in qemu2 ...
	W0729 10:47:04.985486    5921 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:04.985510    5921 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:09.987456    5921 start.go:360] acquireMachinesLock for old-k8s-version-670000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:11.922344    5921 start.go:364] duration metric: took 1.934856459s to acquireMachinesLock for "old-k8s-version-670000"
	I0729 10:47:11.922569    5921 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:11.922931    5921 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:11.933748    5921 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:47:11.984248    5921 start.go:159] libmachine.API.Create for "old-k8s-version-670000" (driver="qemu2")
	I0729 10:47:11.984309    5921 client.go:168] LocalClient.Create starting
	I0729 10:47:11.984442    5921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:47:11.984508    5921 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:11.984528    5921 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:11.984593    5921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:47:11.984636    5921 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:11.984648    5921 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:11.985106    5921 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:47:12.149350    5921 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:12.197850    5921 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:12.197855    5921 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:12.198026    5921 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2
	I0729 10:47:12.207569    5921 main.go:141] libmachine: STDOUT: 
	I0729 10:47:12.207586    5921 main.go:141] libmachine: STDERR: 
	I0729 10:47:12.207657    5921 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2 +20000M
	I0729 10:47:12.215948    5921 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:12.215963    5921 main.go:141] libmachine: STDERR: 
	I0729 10:47:12.215972    5921 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2
	I0729 10:47:12.215976    5921 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:12.215988    5921 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:12.216020    5921 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:a9:b3:76:da:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2
	I0729 10:47:12.217657    5921 main.go:141] libmachine: STDOUT: 
	I0729 10:47:12.217672    5921 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:12.217686    5921 client.go:171] duration metric: took 233.3785ms to LocalClient.Create
	I0729 10:47:14.219822    5921 start.go:128] duration metric: took 2.296924417s to createHost
	I0729 10:47:14.219877    5921 start.go:83] releasing machines lock for "old-k8s-version-670000", held for 2.2974785s
	W0729 10:47:14.220283    5921 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:14.230851    5921 out.go:177] 
	W0729 10:47:14.234882    5921 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:14.234913    5921 out.go:239] * 
	* 
	W0729 10:47:14.237698    5921 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:14.245857    5921 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-670000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000: exit status 7 (64.04ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-670000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (11.90s)
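This failure and the no-preload one below repeat the kubenet pattern exactly: exit status 80 (GUEST_PROVISION) on both create attempts, after which "minikube status" returns exit status 7 with state "Stopped" because the VM never came up. When an agent is in this state, restarting the socket_vmnet launchd service before rerunning the group is the usual remedy; a sketch, with the service label assumed (io.github.lima-vm.socket_vmnet is the upstream default; verify with launchctl list):

	# find the actual service label on this agent
	sudo launchctl list | grep -i socket_vmnet
	# restart it; -k kills any running instance first (label assumed, see above)
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet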

TestStartStop/group/no-preload/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-143000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-143000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.8826795s)

-- stdout --
	* [no-preload-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-143000" primary control-plane node in "no-preload-143000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-143000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:47:09.544360    6031 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:09.544484    6031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:09.544487    6031 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:09.544489    6031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:09.544622    6031 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:09.545667    6031 out.go:298] Setting JSON to false
	I0729 10:47:09.561530    6031 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4593,"bootTime":1722270636,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:47:09.561604    6031 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:47:09.568402    6031 out.go:177] * [no-preload-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:47:09.576246    6031 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:47:09.576313    6031 notify.go:220] Checking for updates...
	I0729 10:47:09.582213    6031 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:47:09.587456    6031 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:47:09.590285    6031 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:47:09.591582    6031 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:47:09.594194    6031 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:47:09.597540    6031 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:09.597614    6031 config.go:182] Loaded profile config "old-k8s-version-670000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 10:47:09.597661    6031 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:47:09.602070    6031 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:47:09.609191    6031 start.go:297] selected driver: qemu2
	I0729 10:47:09.609200    6031 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:47:09.609209    6031 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:47:09.611493    6031 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:47:09.614235    6031 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:47:09.617351    6031 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:47:09.617369    6031 cni.go:84] Creating CNI manager for ""
	I0729 10:47:09.617376    6031 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:47:09.617380    6031 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:47:09.617411    6031 start.go:340] cluster config:
	{Name:no-preload-143000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:09.621025    6031 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:09.627126    6031 out.go:177] * Starting "no-preload-143000" primary control-plane node in "no-preload-143000" cluster
	I0729 10:47:09.631187    6031 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 10:47:09.631272    6031 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/no-preload-143000/config.json ...
	I0729 10:47:09.631290    6031 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/no-preload-143000/config.json: {Name:mk36cbc2fd58ed051cccb66b425c47e892f814d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:47:09.631289    6031 cache.go:107] acquiring lock: {Name:mk1114562d4c081fffb3c8738a4883b61ba8ad55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:09.631314    6031 cache.go:107] acquiring lock: {Name:mk26d0ad414e698fbd445f4e17a4aa0084bb48be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:09.631326    6031 cache.go:107] acquiring lock: {Name:mkc1a77840518fffe8efffdc32e163706199a2c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:09.631382    6031 cache.go:115] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 10:47:09.631398    6031 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.375µs
	I0729 10:47:09.631404    6031 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 10:47:09.631414    6031 cache.go:107] acquiring lock: {Name:mkdc63422c9ce28682439517153d8f4c18cef181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:09.631516    6031 cache.go:107] acquiring lock: {Name:mkd97664f8acfa40c4328a23e1cfbae6ac54cbb3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:09.631532    6031 cache.go:107] acquiring lock: {Name:mk526181859e11d43e4f0dec6d9e77c558faad5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:09.631572    6031 start.go:360] acquireMachinesLock for no-preload-143000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:09.631568    6031 cache.go:107] acquiring lock: {Name:mk47ab4c41f58fc855ef5d9760b55923f88671e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:09.631615    6031 start.go:364] duration metric: took 34.041µs to acquireMachinesLock for "no-preload-143000"
	I0729 10:47:09.631628    6031 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 10:47:09.631630    6031 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 10:47:09.631589    6031 cache.go:107] acquiring lock: {Name:mk04c10ef9233752c07641e64e07cc7f2cf61bd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:09.631628    6031 start.go:93] Provisioning new machine with config: &{Name:no-preload-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:09.631710    6031 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:09.631666    6031 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 10:47:09.631725    6031 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 10:47:09.631772    6031 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 10:47:09.631658    6031 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 10:47:09.631842    6031 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 10:47:09.640232    6031 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:47:09.644017    6031 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 10:47:09.644021    6031 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 10:47:09.645222    6031 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 10:47:09.645311    6031 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 10:47:09.645859    6031 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 10:47:09.646233    6031 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 10:47:09.646664    6031 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 10:47:09.658394    6031 start.go:159] libmachine.API.Create for "no-preload-143000" (driver="qemu2")
	I0729 10:47:09.658414    6031 client.go:168] LocalClient.Create starting
	I0729 10:47:09.658495    6031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:47:09.658526    6031 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:09.658538    6031 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:09.658581    6031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:47:09.658605    6031 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:09.658611    6031 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:09.658944    6031 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:47:09.817321    6031 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:09.900279    6031 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:09.900303    6031 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:09.900482    6031 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2
	I0729 10:47:09.910611    6031 main.go:141] libmachine: STDOUT: 
	I0729 10:47:09.910628    6031 main.go:141] libmachine: STDERR: 
	I0729 10:47:09.910690    6031 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2 +20000M
	I0729 10:47:09.919391    6031 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:09.919418    6031 main.go:141] libmachine: STDERR: 
	I0729 10:47:09.919441    6031 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2
	I0729 10:47:09.919444    6031 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:09.919461    6031 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:09.919489    6031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:52:45:83:e9:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2
	I0729 10:47:09.921774    6031 main.go:141] libmachine: STDOUT: 
	I0729 10:47:09.921798    6031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:09.921819    6031 client.go:171] duration metric: took 263.41ms to LocalClient.Create
	I0729 10:47:10.075268    6031 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0729 10:47:10.078280    6031 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 10:47:10.096843    6031 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 10:47:10.110634    6031 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 10:47:10.113761    6031 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 10:47:10.123544    6031 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0729 10:47:10.154053    6031 cache.go:162] opening:  /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 10:47:10.190051    6031 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 10:47:10.190084    6031 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 558.626584ms
	I0729 10:47:10.190111    6031 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 10:47:11.922099    6031 start.go:128] duration metric: took 2.29042275s to createHost
	I0729 10:47:11.922180    6031 start.go:83] releasing machines lock for "no-preload-143000", held for 2.290623458s
	W0729 10:47:11.922236    6031 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:11.946906    6031 out.go:177] * Deleting "no-preload-143000" in qemu2 ...
	W0729 10:47:11.970130    6031 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:11.970162    6031 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:13.249876    6031 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 10:47:13.249959    6031 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 3.618557417s
	I0729 10:47:13.250053    6031 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 10:47:13.682562    6031 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 10:47:13.682616    6031 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 4.051434708s
	I0729 10:47:13.682642    6031 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 10:47:13.880592    6031 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 10:47:13.880659    6031 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.24928425s
	I0729 10:47:13.880688    6031 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 10:47:14.320185    6031 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 10:47:14.320201    6031 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.688875917s
	I0729 10:47:14.320210    6031 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 10:47:14.560351    6031 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 10:47:14.560360    6031 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 4.929224917s
	I0729 10:47:14.560369    6031 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 10:47:16.972195    6031 start.go:360] acquireMachinesLock for no-preload-143000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:16.972625    6031 start.go:364] duration metric: took 361.667µs to acquireMachinesLock for "no-preload-143000"
	I0729 10:47:16.972694    6031 start.go:93] Provisioning new machine with config: &{Name:no-preload-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:16.972969    6031 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:16.982536    6031 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:47:17.035697    6031 start.go:159] libmachine.API.Create for "no-preload-143000" (driver="qemu2")
	I0729 10:47:17.035739    6031 client.go:168] LocalClient.Create starting
	I0729 10:47:17.035849    6031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:47:17.035901    6031 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:17.035919    6031 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:17.035987    6031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:47:17.036017    6031 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:17.036032    6031 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:17.036560    6031 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:47:17.203032    6031 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:17.335236    6031 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:17.335243    6031 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:17.335462    6031 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2
	I0729 10:47:17.344391    6031 main.go:141] libmachine: STDOUT: 
	I0729 10:47:17.344413    6031 main.go:141] libmachine: STDERR: 
	I0729 10:47:17.344471    6031 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2 +20000M
	I0729 10:47:17.352742    6031 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:17.352754    6031 main.go:141] libmachine: STDERR: 
	I0729 10:47:17.352769    6031 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2
	I0729 10:47:17.352772    6031 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:17.352782    6031 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:17.352820    6031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ed:ad:27:ab:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2
	I0729 10:47:17.354452    6031 main.go:141] libmachine: STDOUT: 
	I0729 10:47:17.354468    6031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:17.354485    6031 client.go:171] duration metric: took 318.751459ms to LocalClient.Create
	I0729 10:47:17.712235    6031 cache.go:157] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 10:47:17.712304    6031 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 8.0811285s
	I0729 10:47:17.712328    6031 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 10:47:17.712368    6031 cache.go:87] Successfully saved all images to host disk.
	I0729 10:47:19.356728    6031 start.go:128] duration metric: took 2.383762334s to createHost
	I0729 10:47:19.356820    6031 start.go:83] releasing machines lock for "no-preload-143000", held for 2.384242084s
	W0729 10:47:19.357143    6031 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:19.368667    6031 out.go:177] 
	W0729 10:47:19.376872    6031 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:19.376907    6031 out.go:239] * 
	* 
	W0729 10:47:19.379453    6031 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:19.386741    6031 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-143000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (59.794833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.95s)
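
The root cause here, and in every other qemu2 start failure in this run, is the same: socket_vmnet_client cannot connect to the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its vmnet file descriptor and minikube aborts with GUEST_PROVISION (exit status 80). A minimal triage sketch for the CI host follows; the daemon binary path is inferred from the client path shown in the log, and the --vmnet-gateway value is the upstream socket_vmnet example, so both are assumptions rather than facts from this report:

	# Is the unix socket present, and is a daemon actually listening on it?
	ls -l /var/run/socket_vmnet
	ps ax | grep socket_vmne[t]
	# If no daemon is running, start one manually (vmnet requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet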

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-670000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-670000 create -f testdata/busybox.yaml: exit status 1 (29.837209ms)

** stderr ** 
	error: context "old-k8s-version-670000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-670000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000: exit status 7 (28.578875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-670000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000: exit status 7 (29.152334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-670000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
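
This DeployApp failure is a cascade from the failed first start rather than an independent regression: the VM was never provisioned, so kubeconfig contains no old-k8s-version-670000 context and any kubectl --context invocation must exit 1. A quick confirmation sketch (illustrative commands, not part of the harness):

	# Expect no old-k8s-version-670000 entry after the failed start:
	kubectl config get-contexts
	out/minikube-darwin-arm64 status -p old-k8s-version-670000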

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-670000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-670000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-670000 describe deploy/metrics-server -n kube-system: exit status 1 (26.363625ms)

** stderr ** 
	error: context "old-k8s-version-670000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-670000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000: exit status 7 (28.2185ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-670000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
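
Note that the addons enable step above exits zero even though the host is stopped; the custom image and registry overrides do get recorded in the profile config (they reappear in the cluster config echoed by SecondStart below), so the failure only surfaces when the test asks kubectl for a deployment that was never applied. One way to inspect the recorded-but-unapplied state, assuming standard minikube flags:

	out/minikube-darwin-arm64 addons list -p old-k8s-version-670000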

TestStartStop/group/old-k8s-version/serial/SecondStart (6.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-670000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-670000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (6.159676458s)

-- stdout --
	* [old-k8s-version-670000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-670000" primary control-plane node in "old-k8s-version-670000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-670000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-670000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:47:18.309732    6113 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:18.309863    6113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:18.309870    6113 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:18.309872    6113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:18.310019    6113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:18.310994    6113 out.go:298] Setting JSON to false
	I0729 10:47:18.326969    6113 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4602,"bootTime":1722270636,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:47:18.327037    6113 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:47:18.331711    6113 out.go:177] * [old-k8s-version-670000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:47:18.339797    6113 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:47:18.339842    6113 notify.go:220] Checking for updates...
	I0729 10:47:18.346719    6113 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:47:18.348219    6113 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:47:18.351792    6113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:47:18.354762    6113 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:47:18.357832    6113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:47:18.361056    6113 config.go:182] Loaded profile config "old-k8s-version-670000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 10:47:18.364741    6113 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 10:47:18.367747    6113 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:47:18.371726    6113 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:47:18.378705    6113 start.go:297] selected driver: qemu2
	I0729 10:47:18.378711    6113 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:18.378761    6113 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:47:18.381181    6113 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:47:18.381207    6113 cni.go:84] Creating CNI manager for ""
	I0729 10:47:18.381214    6113 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 10:47:18.381240    6113 start.go:340] cluster config:
	{Name:old-k8s-version-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:18.384907    6113 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:18.392763    6113 out.go:177] * Starting "old-k8s-version-670000" primary control-plane node in "old-k8s-version-670000" cluster
	I0729 10:47:18.397719    6113 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 10:47:18.397733    6113 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 10:47:18.397745    6113 cache.go:56] Caching tarball of preloaded images
	I0729 10:47:18.397803    6113 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:47:18.397811    6113 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 10:47:18.397863    6113 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/old-k8s-version-670000/config.json ...
	I0729 10:47:18.398337    6113 start.go:360] acquireMachinesLock for old-k8s-version-670000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:19.356984    6113 start.go:364] duration metric: took 958.591166ms to acquireMachinesLock for "old-k8s-version-670000"
	I0729 10:47:19.357165    6113 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:47:19.357202    6113 fix.go:54] fixHost starting: 
	I0729 10:47:19.357938    6113 fix.go:112] recreateIfNeeded on old-k8s-version-670000: state=Stopped err=<nil>
	W0729 10:47:19.357989    6113 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:47:19.372674    6113 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-670000" ...
	I0729 10:47:19.379809    6113 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:19.380003    6113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:a9:b3:76:da:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2
	I0729 10:47:19.389170    6113 main.go:141] libmachine: STDOUT: 
	I0729 10:47:19.389238    6113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:19.389354    6113 fix.go:56] duration metric: took 32.165625ms for fixHost
	I0729 10:47:19.389367    6113 start.go:83] releasing machines lock for "old-k8s-version-670000", held for 32.347542ms
	W0729 10:47:19.389408    6113 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:19.389580    6113 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:19.389596    6113 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:24.391737    6113 start.go:360] acquireMachinesLock for old-k8s-version-670000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:24.392192    6113 start.go:364] duration metric: took 331.5µs to acquireMachinesLock for "old-k8s-version-670000"
	I0729 10:47:24.392321    6113 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:47:24.392342    6113 fix.go:54] fixHost starting: 
	I0729 10:47:24.393064    6113 fix.go:112] recreateIfNeeded on old-k8s-version-670000: state=Stopped err=<nil>
	W0729 10:47:24.393090    6113 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:47:24.397645    6113 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-670000" ...
	I0729 10:47:24.399478    6113 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:24.399726    6113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:a9:b3:76:da:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/old-k8s-version-670000/disk.qcow2
	I0729 10:47:24.407709    6113 main.go:141] libmachine: STDOUT: 
	I0729 10:47:24.407771    6113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:24.407852    6113 fix.go:56] duration metric: took 15.509125ms for fixHost
	I0729 10:47:24.407865    6113 start.go:83] releasing machines lock for "old-k8s-version-670000", held for 15.6495ms
	W0729 10:47:24.408054    6113 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-670000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-670000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:24.416469    6113 out.go:177] 
	W0729 10:47:24.417714    6113 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:24.417761    6113 out.go:239] * 
	* 
	W0729 10:47:24.419595    6113 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:24.433403    6113 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-670000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000: exit status 7 (65.72125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-670000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (6.23s)
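
Unlike the first start, SecondStart goes through the fixHost path (restarting the existing, stopped VM), but it dies on the same socket_vmnet connection, once immediately and once after the 5-second retry the log announces. Following the log's own remediation hint, recovery once the daemon is reachable again would look like this (a sketch, reusing the flags from the failing invocation):

	out/minikube-darwin-arm64 delete -p old-k8s-version-670000
	out/minikube-darwin-arm64 start -p old-k8s-version-670000 --driver=qemu2 --kubernetes-version=v1.20.0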

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-143000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-143000 create -f testdata/busybox.yaml: exit status 1 (28.826291ms)

** stderr ** 
	error: context "no-preload-143000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-143000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (28.073542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (28.25625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-143000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-143000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-143000 describe deploy/metrics-server -n kube-system: exit status 1 (26.584875ms)

** stderr ** 
	error: context "no-preload-143000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-143000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (29.149292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
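
For context, the assertion at start_stop_delete_test.go:221 is a substring check: the expected value is the custom registry from --registries joined with the image reference from --images. A hedged sketch of that composition (variable names illustrative, not the suite's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // expectedAddonImage mirrors how the expected string in the failure above
    // is built: custom registry + "/" + image reference.
    func expectedAddonImage(registry, image string) string {
        return registry + "/" + image
    }

    func main() {
        // deployInfo would hold the output of `kubectl describe
        // deploy/metrics-server`; it is empty here because the describe
        // call itself failed on the missing context.
        deployInfo := ""
        want := expectedAddonImage("fake.domain", "registry.k8s.io/echoserver:1.4")
        fmt.Println(strings.Contains(deployInfo, want)) // false, as reported above
    }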

TestStartStop/group/no-preload/serial/SecondStart (5.75s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-143000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-143000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.689733833s)

-- stdout --
	* [no-preload-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-143000" primary control-plane node in "no-preload-143000" cluster
	* Restarting existing qemu2 VM for "no-preload-143000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-143000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:47:21.810693    6150 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:21.810835    6150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:21.810838    6150 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:21.810841    6150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:21.811011    6150 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:21.812002    6150 out.go:298] Setting JSON to false
	I0729 10:47:21.827767    6150 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4605,"bootTime":1722270636,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:47:21.827838    6150 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:47:21.832966    6150 out.go:177] * [no-preload-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:47:21.839067    6150 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:47:21.839119    6150 notify.go:220] Checking for updates...
	I0729 10:47:21.846057    6150 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:47:21.849040    6150 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:47:21.851948    6150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:47:21.854961    6150 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:47:21.858022    6150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:47:21.861130    6150 config.go:182] Loaded profile config "no-preload-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 10:47:21.861375    6150 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:47:21.865957    6150 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:47:21.871994    6150 start.go:297] selected driver: qemu2
	I0729 10:47:21.872004    6150 start.go:901] validating driver "qemu2" against &{Name:no-preload-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:21.872074    6150 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:47:21.874264    6150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:47:21.874314    6150 cni.go:84] Creating CNI manager for ""
	I0729 10:47:21.874321    6150 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:47:21.874347    6150 start.go:340] cluster config:
	{Name:no-preload-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:21.877782    6150 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:21.885064    6150 out.go:177] * Starting "no-preload-143000" primary control-plane node in "no-preload-143000" cluster
	I0729 10:47:21.888944    6150 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 10:47:21.889034    6150 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/no-preload-143000/config.json ...
	I0729 10:47:21.889068    6150 cache.go:107] acquiring lock: {Name:mk04c10ef9233752c07641e64e07cc7f2cf61bd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:21.889067    6150 cache.go:107] acquiring lock: {Name:mk1114562d4c081fffb3c8738a4883b61ba8ad55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:21.889089    6150 cache.go:107] acquiring lock: {Name:mkc1a77840518fffe8efffdc32e163706199a2c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:21.889148    6150 cache.go:115] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 10:47:21.889149    6150 cache.go:115] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 10:47:21.889153    6150 cache.go:115] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 10:47:21.889156    6150 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 98.791µs
	I0729 10:47:21.889159    6150 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 70.458µs
	I0729 10:47:21.889163    6150 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 10:47:21.889166    6150 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 10:47:21.889067    6150 cache.go:107] acquiring lock: {Name:mk26d0ad414e698fbd445f4e17a4aa0084bb48be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:21.889184    6150 cache.go:107] acquiring lock: {Name:mk526181859e11d43e4f0dec6d9e77c558faad5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:21.889181    6150 cache.go:107] acquiring lock: {Name:mk47ab4c41f58fc855ef5d9760b55923f88671e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:21.889155    6150 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 90.667µs
	I0729 10:47:21.889249    6150 cache.go:115] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 10:47:21.889254    6150 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 190.084µs
	I0729 10:47:21.889171    6150 cache.go:107] acquiring lock: {Name:mkd97664f8acfa40c4328a23e1cfbae6ac54cbb3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:21.889239    6150 cache.go:115] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 10:47:21.889273    6150 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 10:47:21.889239    6150 cache.go:115] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 10:47:21.889291    6150 cache.go:115] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 10:47:21.889296    6150 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 125.041µs
	I0729 10:47:21.889259    6150 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 10:47:21.889284    6150 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 90.5µs
	I0729 10:47:21.889303    6150 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 10:47:21.889294    6150 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 114.167µs
	I0729 10:47:21.889310    6150 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 10:47:21.889301    6150 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 10:47:21.889325    6150 cache.go:107] acquiring lock: {Name:mkdc63422c9ce28682439517153d8f4c18cef181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:21.889380    6150 cache.go:115] /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 10:47:21.889385    6150 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 163.666µs
	I0729 10:47:21.889393    6150 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 10:47:21.889398    6150 cache.go:87] Successfully saved all images to host disk.
	I0729 10:47:21.889475    6150 start.go:360] acquireMachinesLock for no-preload-143000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:21.889504    6150 start.go:364] duration metric: took 22.916µs to acquireMachinesLock for "no-preload-143000"
	I0729 10:47:21.889514    6150 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:47:21.889519    6150 fix.go:54] fixHost starting: 
	I0729 10:47:21.889634    6150 fix.go:112] recreateIfNeeded on no-preload-143000: state=Stopped err=<nil>
	W0729 10:47:21.889643    6150 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:47:21.896987    6150 out.go:177] * Restarting existing qemu2 VM for "no-preload-143000" ...
	I0729 10:47:21.900985    6150 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:21.901033    6150 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ed:ad:27:ab:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2
	I0729 10:47:21.903123    6150 main.go:141] libmachine: STDOUT: 
	I0729 10:47:21.903145    6150 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:21.903172    6150 fix.go:56] duration metric: took 13.653666ms for fixHost
	I0729 10:47:21.903176    6150 start.go:83] releasing machines lock for "no-preload-143000", held for 13.668209ms
	W0729 10:47:21.903182    6150 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:21.903233    6150 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:21.903238    6150 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:26.905245    6150 start.go:360] acquireMachinesLock for no-preload-143000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:27.396935    6150 start.go:364] duration metric: took 491.605375ms to acquireMachinesLock for "no-preload-143000"
	I0729 10:47:27.397072    6150 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:47:27.397092    6150 fix.go:54] fixHost starting: 
	I0729 10:47:27.397898    6150 fix.go:112] recreateIfNeeded on no-preload-143000: state=Stopped err=<nil>
	W0729 10:47:27.397927    6150 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:47:27.407428    6150 out.go:177] * Restarting existing qemu2 VM for "no-preload-143000" ...
	I0729 10:47:27.422502    6150 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:27.422706    6150 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ed:ad:27:ab:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/no-preload-143000/disk.qcow2
	I0729 10:47:27.432446    6150 main.go:141] libmachine: STDOUT: 
	I0729 10:47:27.432535    6150 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:27.432620    6150 fix.go:56] duration metric: took 35.52975ms for fixHost
	I0729 10:47:27.432639    6150 start.go:83] releasing machines lock for "no-preload-143000", held for 35.618041ms
	W0729 10:47:27.432828    6150 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-143000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-143000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:27.441492    6150 out.go:177] 
	W0729 10:47:27.445476    6150 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:27.445495    6150 out.go:239] * 
	* 
	W0729 10:47:27.447697    6150 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:27.459435    6150 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-143000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (59.339959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.75s)
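
Every start in this run dies the same way: the qemu2 driver launches the VM through socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket. A standalone probe (not part of the suite) that reproduces the "Connection refused" from the host side:

    package main

    import (
        "log"
        "net"
    )

    func main() {
        // socket_vmnet serves a unix socket at /var/run/socket_vmnet; if the
        // daemon is not running, this dial fails with "connection refused",
        // matching the driver errors in the log above.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            log.Fatalf("socket_vmnet unreachable: %v", err)
        }
        conn.Close()
        log.Println("socket_vmnet is accepting connections")
    }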

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-670000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000: exit status 7 (31.51175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-670000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-670000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-670000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-670000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.875334ms)

** stderr ** 
	error: context "old-k8s-version-670000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-670000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000: exit status 7 (29.101334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-670000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
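
Both old-k8s-version post-stop checks fail at client-config time on the same missing context. As a rough standalone equivalent of the dashboard wait (a hypothetical sketch, not the suite's helper), one could block on the scraper deployment with kubectl wait:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Block until dashboard-metrics-scraper is available or the timeout
        // expires. On this run the context does not exist, so the command
        // fails immediately, just like the describe call above.
        cmd := exec.Command("kubectl", "--context", "old-k8s-version-670000",
            "-n", "kubernetes-dashboard", "wait",
            "--for=condition=available", "deploy/dashboard-metrics-scraper",
            "--timeout=60s")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("wait failed: %v\n%s", err, out)
        }
    }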

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-670000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000: exit status 7 (28.7785ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-670000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
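
The diff above reads as a set difference: every expected v1.20.0 image is reported missing because `image list` returned nothing for the never-started profile. A small reconstruction of that comparison (the want list is copied from the failure; the empty got set is an assumption matching this run):

    package main

    import "fmt"

    func main() {
        want := []string{
            "k8s.gcr.io/coredns:1.7.0",
            "k8s.gcr.io/etcd:3.4.13-0",
            "k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
            "k8s.gcr.io/kube-apiserver:v1.20.0",
            "k8s.gcr.io/kube-controller-manager:v1.20.0",
            "k8s.gcr.io/kube-proxy:v1.20.0",
            "k8s.gcr.io/kube-scheduler:v1.20.0",
            "k8s.gcr.io/pause:3.2",
        }
        // got would be parsed from `minikube image list --format=json`;
        // it is empty here, so every wanted image shows up as missing.
        got := map[string]bool{}
        var missing []string
        for _, img := range want {
            if !got[img] {
                missing = append(missing, img)
            }
        }
        fmt.Println(missing) // all eight images, as in the -want +got diff
    }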

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-670000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-670000 --alsologtostderr -v=1: exit status 83 (39.83425ms)

-- stdout --
	* The control-plane node old-k8s-version-670000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-670000"

-- /stdout --
** stderr ** 
	I0729 10:47:24.694099    6169 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:24.694501    6169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:24.694504    6169 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:24.694507    6169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:24.694684    6169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:24.694893    6169 out.go:298] Setting JSON to false
	I0729 10:47:24.694900    6169 mustload.go:65] Loading cluster: old-k8s-version-670000
	I0729 10:47:24.695081    6169 config.go:182] Loaded profile config "old-k8s-version-670000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 10:47:24.698990    6169 out.go:177] * The control-plane node old-k8s-version-670000 host is not running: state=Stopped
	I0729 10:47:24.702015    6169 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-670000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-670000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000: exit status 7 (28.780833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-670000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000: exit status 7 (27.880875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-670000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
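
The exit codes in this section are informative on their own: pause exits 83 with the "host is not running" advisory, and the post-mortem status calls exit 7, which minikube's status help describes as a bitmask (1 = host not running, +2 = cluster not running, +4 = kubernetes not running). A hedged sketch of gating pause on that status code:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Run status first; a non-zero exit (7 here: host, cluster and
        // kubernetes all down) means pause would only hit the exit-83 path.
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "-p", "old-k8s-version-670000")
        if err := cmd.Run(); err != nil {
            if ee, ok := err.(*exec.ExitError); ok {
                log.Fatalf("not pausing: status exit code %d", ee.ExitCode())
            }
            log.Fatalf("status did not run: %v", err)
        }
    }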

TestStartStop/group/embed-certs/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-966000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-966000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.87177725s)

-- stdout --
	* [embed-certs-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-966000" primary control-plane node in "embed-certs-966000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-966000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:47:24.999245    6186 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:24.999367    6186 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:24.999370    6186 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:24.999372    6186 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:24.999491    6186 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:25.000536    6186 out.go:298] Setting JSON to false
	I0729 10:47:25.016225    6186 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4609,"bootTime":1722270636,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:47:25.016309    6186 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:47:25.020994    6186 out.go:177] * [embed-certs-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:47:25.027925    6186 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:47:25.027967    6186 notify.go:220] Checking for updates...
	I0729 10:47:25.034950    6186 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:47:25.037984    6186 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:47:25.040939    6186 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:47:25.043938    6186 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:47:25.046973    6186 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:47:25.050238    6186 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:25.050308    6186 config.go:182] Loaded profile config "no-preload-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 10:47:25.050365    6186 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:47:25.054966    6186 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:47:25.061917    6186 start.go:297] selected driver: qemu2
	I0729 10:47:25.061925    6186 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:47:25.061931    6186 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:47:25.064297    6186 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:47:25.067889    6186 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:47:25.071027    6186 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:47:25.071042    6186 cni.go:84] Creating CNI manager for ""
	I0729 10:47:25.071048    6186 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:47:25.071052    6186 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:47:25.071088    6186 start.go:340] cluster config:
	{Name:embed-certs-966000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:25.074733    6186 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:25.081890    6186 out.go:177] * Starting "embed-certs-966000" primary control-plane node in "embed-certs-966000" cluster
	I0729 10:47:25.085778    6186 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:47:25.085790    6186 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:47:25.085798    6186 cache.go:56] Caching tarball of preloaded images
	I0729 10:47:25.085856    6186 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:47:25.085867    6186 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:47:25.085922    6186 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/embed-certs-966000/config.json ...
	I0729 10:47:25.085935    6186 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/embed-certs-966000/config.json: {Name:mkf117cdeb3a8deeffa776d2f120f7f69a8a294b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:47:25.086279    6186 start.go:360] acquireMachinesLock for embed-certs-966000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:25.086314    6186 start.go:364] duration metric: took 29µs to acquireMachinesLock for "embed-certs-966000"
	I0729 10:47:25.086327    6186 start.go:93] Provisioning new machine with config: &{Name:embed-certs-966000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:25.086376    6186 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:25.095761    6186 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:47:25.113958    6186 start.go:159] libmachine.API.Create for "embed-certs-966000" (driver="qemu2")
	I0729 10:47:25.113984    6186 client.go:168] LocalClient.Create starting
	I0729 10:47:25.114046    6186 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:47:25.114078    6186 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:25.114086    6186 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:25.114130    6186 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:47:25.114158    6186 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:25.114168    6186 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:25.114595    6186 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:47:25.267413    6186 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:25.375445    6186 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:25.375453    6186 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:25.375632    6186 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2
	I0729 10:47:25.384980    6186 main.go:141] libmachine: STDOUT: 
	I0729 10:47:25.384996    6186 main.go:141] libmachine: STDERR: 
	I0729 10:47:25.385050    6186 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2 +20000M
	I0729 10:47:25.392856    6186 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:25.392870    6186 main.go:141] libmachine: STDERR: 
	I0729 10:47:25.392878    6186 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2
	I0729 10:47:25.392883    6186 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:25.392893    6186 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:25.392916    6186 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:35:72:b6:a7:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2
	I0729 10:47:25.394490    6186 main.go:141] libmachine: STDOUT: 
	I0729 10:47:25.394503    6186 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:25.394534    6186 client.go:171] duration metric: took 280.545458ms to LocalClient.Create
	I0729 10:47:27.396670    6186 start.go:128] duration metric: took 2.31034075s to createHost
	I0729 10:47:27.396778    6186 start.go:83] releasing machines lock for "embed-certs-966000", held for 2.310460584s
	W0729 10:47:27.396847    6186 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:27.419463    6186 out.go:177] * Deleting "embed-certs-966000" in qemu2 ...
	W0729 10:47:27.473069    6186 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:27.473109    6186 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:32.475192    6186 start.go:360] acquireMachinesLock for embed-certs-966000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:32.475669    6186 start.go:364] duration metric: took 355.417µs to acquireMachinesLock for "embed-certs-966000"
	I0729 10:47:32.475819    6186 start.go:93] Provisioning new machine with config: &{Name:embed-certs-966000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:32.476091    6186 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:32.484579    6186 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:47:32.534054    6186 start.go:159] libmachine.API.Create for "embed-certs-966000" (driver="qemu2")
	I0729 10:47:32.534119    6186 client.go:168] LocalClient.Create starting
	I0729 10:47:32.534245    6186 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:47:32.534303    6186 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:32.534318    6186 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:32.534378    6186 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:47:32.534422    6186 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:32.534443    6186 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:32.534957    6186 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:47:32.704372    6186 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:32.779807    6186 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:32.779812    6186 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:32.779977    6186 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2
	I0729 10:47:32.789065    6186 main.go:141] libmachine: STDOUT: 
	I0729 10:47:32.789081    6186 main.go:141] libmachine: STDERR: 
	I0729 10:47:32.789126    6186 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2 +20000M
	I0729 10:47:32.797003    6186 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:32.797024    6186 main.go:141] libmachine: STDERR: 
	I0729 10:47:32.797034    6186 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2
	I0729 10:47:32.797047    6186 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:32.797054    6186 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:32.797089    6186 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:d7:d9:8d:4f:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2
	I0729 10:47:32.798799    6186 main.go:141] libmachine: STDOUT: 
	I0729 10:47:32.798816    6186 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:32.798828    6186 client.go:171] duration metric: took 264.711792ms to LocalClient.Create
	I0729 10:47:34.800941    6186 start.go:128] duration metric: took 2.324883s to createHost
	I0729 10:47:34.800994    6186 start.go:83] releasing machines lock for "embed-certs-966000", held for 2.325370125s
	W0729 10:47:34.801435    6186 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-966000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-966000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:34.810422    6186 out.go:177] 
	W0729 10:47:34.818488    6186 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:34.818528    6186 out.go:239] * 
	* 
	W0729 10:47:34.821178    6186 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:34.829513    6186 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-966000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000: exit status 7 (67.174625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.94s)
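Every qemu2 start in this group fails at the same point: the driver launches qemu-system-aarch64 through socket_vmnet_client, and the dial of the /var/run/socket_vmnet unix socket is refused, i.e. the socket_vmnet daemon is not listening on the CI host. A minimal standalone probe for that precondition (a sketch, not part of the minikube sources; the socket path is taken from the failing command line above):

	// probe_socket_vmnet.go: report whether the socket_vmnet daemon accepts
	// connections, i.e. whether a qemu2 start has any chance of succeeding.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing command line
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// The same condition that surfaces above as
			// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, the daemon needs to be (re)started on the host before any of the qemu2 starts below can succeed.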

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-143000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (31.246541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
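This failure, like the kubectl failures that follow, is secondary: FirstStart never created the cluster, so no "no-preload-143000" context was ever written to the kubeconfig, and every kubectl --context call exits 1. A small guard that distinguishes a missing context from a real assertion failure (a hypothetical helper, not in helpers_test.go; it shells out to kubectl's name-only context listing):

	// context_guard.go: check whether a kubeconfig context exists before
	// running assertions against it.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hasContext(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, c := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if c == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasContext("no-preload-143000")
		fmt.Println(ok, err) // false <nil> on this host: the start never succeeded
	}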

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-143000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-143000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-143000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.838583ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-143000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-143000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (28.888ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-143000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (28.973417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
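The (-want +got) diff above lists every expected v1.31.0-beta.0 image as missing because `image list` had nothing to return from a VM that never booted. The check reduces to a set difference over image names; a sketch of the same comparison (image names copied from the diff, helper name hypothetical):

	// image_diff.go: report which wanted images are absent from the got list.
	package main

	import "fmt"

	func missing(want, got []string) []string {
		have := make(map[string]bool, len(got))
		for _, g := range got {
			have[g] = true
		}
		var out []string
		for _, w := range want {
			if !have[w] {
				out = append(out, w)
			}
		}
		return out
	}

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
			"registry.k8s.io/pause:3.10",
			// ...remaining images from the diff above
		}
		got := []string{} // empty: the host is Stopped, so nothing was listed
		fmt.Println(missing(want, got))
	}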

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-143000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-143000 --alsologtostderr -v=1: exit status 83 (43.625667ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-143000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-143000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:47:27.718006    6208 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:27.718171    6208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:27.718174    6208 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:27.718176    6208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:27.718299    6208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:27.718531    6208 out.go:298] Setting JSON to false
	I0729 10:47:27.718537    6208 mustload.go:65] Loading cluster: no-preload-143000
	I0729 10:47:27.718751    6208 config.go:182] Loaded profile config "no-preload-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 10:47:27.723494    6208 out.go:177] * The control-plane node no-preload-143000 host is not running: state=Stopped
	I0729 10:47:27.729484    6208 out.go:177]   To start a cluster, run: "minikube start -p no-preload-143000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-143000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (28.807083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (28.847083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
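Two distinct exit codes appear in this block: pause exits 83 when the control-plane host is not running, and `status --format={{.Host}}` exits 7 while printing "Stopped". A sketch that captures the host state and exit code the way the post-mortem does (binary path and profile name taken from the log; the code-to-meaning mapping is only what this report shows):

	// status_exit.go: run "minikube status" and report host state plus exit code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "no-preload-143000")
		out, err := cmd.Output()
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode() // 7 for a Stopped host in the runs above
		}
		fmt.Printf("host=%s exit=%d\n", strings.TrimSpace(string(out)), code)
	}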

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-371000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-371000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.982612625s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-371000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-371000" primary control-plane node in "default-k8s-diff-port-371000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-371000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:47:28.138365    6232 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:28.138483    6232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:28.138487    6232 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:28.138489    6232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:28.138633    6232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:28.139709    6232 out.go:298] Setting JSON to false
	I0729 10:47:28.155725    6232 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4612,"bootTime":1722270636,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:47:28.155793    6232 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:47:28.159517    6232 out.go:177] * [default-k8s-diff-port-371000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:47:28.166489    6232 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:47:28.166594    6232 notify.go:220] Checking for updates...
	I0729 10:47:28.170934    6232 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:47:28.174437    6232 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:47:28.177460    6232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:47:28.180457    6232 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:47:28.183440    6232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:47:28.186768    6232 config.go:182] Loaded profile config "embed-certs-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:28.186831    6232 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:28.186890    6232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:47:28.191396    6232 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:47:28.198491    6232 start.go:297] selected driver: qemu2
	I0729 10:47:28.198500    6232 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:47:28.198507    6232 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:47:28.200733    6232 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:47:28.204444    6232 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:47:28.207438    6232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:47:28.207466    6232 cni.go:84] Creating CNI manager for ""
	I0729 10:47:28.207473    6232 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:47:28.207478    6232 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:47:28.207508    6232 start.go:340] cluster config:
	{Name:default-k8s-diff-port-371000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:28.211074    6232 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:28.218293    6232 out.go:177] * Starting "default-k8s-diff-port-371000" primary control-plane node in "default-k8s-diff-port-371000" cluster
	I0729 10:47:28.222427    6232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:47:28.222441    6232 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:47:28.222448    6232 cache.go:56] Caching tarball of preloaded images
	I0729 10:47:28.222508    6232 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:47:28.222514    6232 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:47:28.222569    6232 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/default-k8s-diff-port-371000/config.json ...
	I0729 10:47:28.222580    6232 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/default-k8s-diff-port-371000/config.json: {Name:mkd36429788153123d3d94859793cef67bf1c2ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:47:28.222803    6232 start.go:360] acquireMachinesLock for default-k8s-diff-port-371000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:28.222841    6232 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "default-k8s-diff-port-371000"
	I0729 10:47:28.222855    6232 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:28.222885    6232 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:28.231468    6232 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:47:28.249023    6232 start.go:159] libmachine.API.Create for "default-k8s-diff-port-371000" (driver="qemu2")
	I0729 10:47:28.249054    6232 client.go:168] LocalClient.Create starting
	I0729 10:47:28.249127    6232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:47:28.249161    6232 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:28.249170    6232 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:28.249209    6232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:47:28.249234    6232 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:28.249242    6232 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:28.249602    6232 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:47:28.405212    6232 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:28.556027    6232 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:28.556033    6232 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:28.556227    6232 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2
	I0729 10:47:28.565835    6232 main.go:141] libmachine: STDOUT: 
	I0729 10:47:28.565857    6232 main.go:141] libmachine: STDERR: 
	I0729 10:47:28.565913    6232 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2 +20000M
	I0729 10:47:28.573777    6232 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:28.573790    6232 main.go:141] libmachine: STDERR: 
	I0729 10:47:28.573814    6232 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2
	I0729 10:47:28.573823    6232 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:28.573834    6232 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:28.573857    6232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:8b:97:22:19:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2
	I0729 10:47:28.575416    6232 main.go:141] libmachine: STDOUT: 
	I0729 10:47:28.575429    6232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:28.575447    6232 client.go:171] duration metric: took 326.396875ms to LocalClient.Create
	I0729 10:47:30.577573    6232 start.go:128] duration metric: took 2.354736459s to createHost
	I0729 10:47:30.577618    6232 start.go:83] releasing machines lock for "default-k8s-diff-port-371000", held for 2.354838292s
	W0729 10:47:30.577679    6232 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:30.592909    6232 out.go:177] * Deleting "default-k8s-diff-port-371000" in qemu2 ...
	W0729 10:47:30.622173    6232 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:30.622196    6232 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:35.624242    6232 start.go:360] acquireMachinesLock for default-k8s-diff-port-371000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:35.624666    6232 start.go:364] duration metric: took 292.375µs to acquireMachinesLock for "default-k8s-diff-port-371000"
	I0729 10:47:35.624830    6232 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:35.625158    6232 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:35.634851    6232 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:47:35.686056    6232 start.go:159] libmachine.API.Create for "default-k8s-diff-port-371000" (driver="qemu2")
	I0729 10:47:35.686105    6232 client.go:168] LocalClient.Create starting
	I0729 10:47:35.686207    6232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:47:35.686274    6232 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:35.686292    6232 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:35.686357    6232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:47:35.686394    6232 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:35.686407    6232 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:35.686946    6232 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:47:35.852576    6232 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:36.025010    6232 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:36.025018    6232 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:36.025220    6232 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2
	I0729 10:47:36.034757    6232 main.go:141] libmachine: STDOUT: 
	I0729 10:47:36.034777    6232 main.go:141] libmachine: STDERR: 
	I0729 10:47:36.034821    6232 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2 +20000M
	I0729 10:47:36.042628    6232 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:36.042642    6232 main.go:141] libmachine: STDERR: 
	I0729 10:47:36.042654    6232 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2
	I0729 10:47:36.042671    6232 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:36.042681    6232 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:36.042706    6232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:59:9f:d0:0e:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2
	I0729 10:47:36.044319    6232 main.go:141] libmachine: STDOUT: 
	I0729 10:47:36.044335    6232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:36.044347    6232 client.go:171] duration metric: took 358.2485ms to LocalClient.Create
	I0729 10:47:38.046495    6232 start.go:128] duration metric: took 2.421376791s to createHost
	I0729 10:47:38.046561    6232 start.go:83] releasing machines lock for "default-k8s-diff-port-371000", held for 2.421940042s
	W0729 10:47:38.047000    6232 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-371000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-371000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:38.062705    6232 out.go:177] 
	W0729 10:47:38.066851    6232 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:38.066882    6232 out.go:239] * 
	* 
	W0729 10:47:38.069666    6232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:38.079694    6232 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-371000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000: exit status 7 (64.196666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.05s)
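The stderr log above shows the driver's recovery shape clearly: the first create fails at the socket dial, the half-created profile is deleted, the driver waits ("Will try again in 5 seconds"), and the second attempt fails identically before the GUEST_PROVISION exit. A stripped-down sketch of that one-retry flow (createHost stands in for the real libmachine call; the error text is copied from the log):

	// retry_start.go: the create / delete / wait-5s / retry shape from the log.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		// Both attempts in this run die at the same dial.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}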

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-966000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-966000 create -f testdata/busybox.yaml: exit status 1 (28.999375ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-966000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-966000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000: exit status 7 (27.831208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-966000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000: exit status 7 (28.36575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-966000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-966000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-966000 describe deploy/metrics-server -n kube-system: exit status 1 (26.733709ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-966000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-966000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000: exit status 7 (28.124209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
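Note that `addons enable ... --images=... --registries=...` itself evidently succeeds with no cluster running: it records the overrides in the profile config (they reappear later as the CustomAddonImages and CustomAddonRegistries maps in the config dumps); only the kubectl describe that follows fails. The flag value is a comma-separated Name=value list; a sketch of that parsing (helper name hypothetical, values from the command above):

	// addon_overrides.go: split "Name=value[,Name=value...]" override flags
	// into a map, like the CustomAddonImages map in the config dumps.
	package main

	import (
		"fmt"
		"strings"
	)

	func parseOverrides(flag string) map[string]string {
		m := map[string]string{}
		for _, pair := range strings.Split(flag, ",") {
			if k, v, ok := strings.Cut(pair, "="); ok {
				m[k] = v
			}
		}
		return m
	}

	func main() {
		fmt.Println(parseOverrides("MetricsServer=registry.k8s.io/echoserver:1.4")) // --images
		fmt.Println(parseOverrides("MetricsServer=fake.domain"))                    // --registries
	}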

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-371000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-371000 create -f testdata/busybox.yaml: exit status 1 (30.767375ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-371000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-371000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000: exit status 7 (29.108958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-371000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000: exit status 7 (28.895958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-371000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-371000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-371000 describe deploy/metrics-server -n kube-system: exit status 1 (26.34025ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-371000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-371000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000: exit status 7 (28.78575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-966000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-966000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.187951042s)

                                                
                                                
-- stdout --
	* [embed-certs-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-966000" primary control-plane node in "embed-certs-966000" cluster
	* Restarting existing qemu2 VM for "embed-certs-966000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-966000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:47:38.888659    6305 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:38.888789    6305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:38.888792    6305 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:38.888795    6305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:38.888933    6305 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:38.889906    6305 out.go:298] Setting JSON to false
	I0729 10:47:38.905832    6305 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4622,"bootTime":1722270636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:47:38.905897    6305 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:47:38.911424    6305 out.go:177] * [embed-certs-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:47:38.918398    6305 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:47:38.918459    6305 notify.go:220] Checking for updates...
	I0729 10:47:38.925327    6305 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:47:38.928435    6305 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:47:38.931405    6305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:47:38.934391    6305 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:47:38.937374    6305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:47:38.940654    6305 config.go:182] Loaded profile config "embed-certs-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:38.940899    6305 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:47:38.945332    6305 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:47:38.952418    6305 start.go:297] selected driver: qemu2
	I0729 10:47:38.952428    6305 start.go:901] validating driver "qemu2" against &{Name:embed-certs-966000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:38.952493    6305 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:47:38.954888    6305 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:47:38.954927    6305 cni.go:84] Creating CNI manager for ""
	I0729 10:47:38.954935    6305 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:47:38.954965    6305 start.go:340] cluster config:
	{Name:embed-certs-966000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-966000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:38.958595    6305 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:38.966394    6305 out.go:177] * Starting "embed-certs-966000" primary control-plane node in "embed-certs-966000" cluster
	I0729 10:47:38.970398    6305 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:47:38.970413    6305 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:47:38.970425    6305 cache.go:56] Caching tarball of preloaded images
	I0729 10:47:38.970483    6305 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:47:38.970488    6305 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:47:38.970551    6305 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/embed-certs-966000/config.json ...
	I0729 10:47:38.971009    6305 start.go:360] acquireMachinesLock for embed-certs-966000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:38.971036    6305 start.go:364] duration metric: took 21µs to acquireMachinesLock for "embed-certs-966000"
	I0729 10:47:38.971045    6305 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:47:38.971050    6305 fix.go:54] fixHost starting: 
	I0729 10:47:38.971162    6305 fix.go:112] recreateIfNeeded on embed-certs-966000: state=Stopped err=<nil>
	W0729 10:47:38.971170    6305 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:47:38.975324    6305 out.go:177] * Restarting existing qemu2 VM for "embed-certs-966000" ...
	I0729 10:47:38.983314    6305 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:38.983355    6305 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:d7:d9:8d:4f:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2
	I0729 10:47:38.985371    6305 main.go:141] libmachine: STDOUT: 
	I0729 10:47:38.985392    6305 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:38.985419    6305 fix.go:56] duration metric: took 14.3695ms for fixHost
	I0729 10:47:38.985423    6305 start.go:83] releasing machines lock for "embed-certs-966000", held for 14.384333ms
	W0729 10:47:38.985429    6305 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:38.985468    6305 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:38.985473    6305 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:43.987544    6305 start.go:360] acquireMachinesLock for embed-certs-966000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:43.987979    6305 start.go:364] duration metric: took 324.459µs to acquireMachinesLock for "embed-certs-966000"
	I0729 10:47:43.988113    6305 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:47:43.988132    6305 fix.go:54] fixHost starting: 
	I0729 10:47:43.988849    6305 fix.go:112] recreateIfNeeded on embed-certs-966000: state=Stopped err=<nil>
	W0729 10:47:43.988876    6305 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:47:43.997206    6305 out.go:177] * Restarting existing qemu2 VM for "embed-certs-966000" ...
	I0729 10:47:44.000237    6305 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:44.000662    6305 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:d7:d9:8d:4f:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/embed-certs-966000/disk.qcow2
	I0729 10:47:44.009930    6305 main.go:141] libmachine: STDOUT: 
	I0729 10:47:44.010006    6305 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:44.010111    6305 fix.go:56] duration metric: took 21.976209ms for fixHost
	I0729 10:47:44.010130    6305 start.go:83] releasing machines lock for "embed-certs-966000", held for 22.130584ms
	W0729 10:47:44.010331    6305 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-966000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-966000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:44.018250    6305 out.go:177] 
	W0729 10:47:44.022255    6305 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:44.022278    6305 out.go:239] * 
	* 
	W0729 10:47:44.024657    6305 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:44.036227    6305 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-966000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000: exit status 7 (70.236917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
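Every failure in this group reduces to the same root cause visible in the logs above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so qemu is never launched. A minimal manual triage on the build host might look like the following sketch (paths are copied from the logs; the foreground invocation and gateway address follow socket_vmnet's documented defaults and are assumptions here, not something this run verified):

	# Does the daemon's unix socket exist on the host?
	ls -l /var/run/socket_vmnet
	# Reproduce the failure without minikube: the client connects to the socket
	# and hands the connection to the wrapped command (here a no-op).
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# If that also reports "Connection refused", (re)start the daemon, e.g. in
	# the foreground for debugging (gateway address is an assumption):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet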
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-371000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-371000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.198834417s)
-- stdout --
	* [default-k8s-diff-port-371000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-371000" primary control-plane node in "default-k8s-diff-port-371000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
-- /stdout --
** stderr ** 
	I0729 10:47:41.950056    6328 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:41.950195    6328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:41.950199    6328 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:41.950201    6328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:41.950356    6328 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:41.951374    6328 out.go:298] Setting JSON to false
	I0729 10:47:41.967182    6328 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4625,"bootTime":1722270636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:47:41.967243    6328 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:47:41.972318    6328 out.go:177] * [default-k8s-diff-port-371000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:47:41.979492    6328 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:47:41.979554    6328 notify.go:220] Checking for updates...
	I0729 10:47:41.985466    6328 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:47:41.988495    6328 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:47:41.989914    6328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:47:42.001089    6328 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:47:42.004488    6328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:47:42.007684    6328 config.go:182] Loaded profile config "default-k8s-diff-port-371000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:42.007967    6328 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:47:42.012485    6328 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:47:42.019427    6328 start.go:297] selected driver: qemu2
	I0729 10:47:42.019434    6328 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:42.019503    6328 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:47:42.021675    6328 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:47:42.021716    6328 cni.go:84] Creating CNI manager for ""
	I0729 10:47:42.021730    6328 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:47:42.021752    6328 start.go:340] cluster config:
	{Name:default-k8s-diff-port-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:42.025132    6328 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:42.032503    6328 out.go:177] * Starting "default-k8s-diff-port-371000" primary control-plane node in "default-k8s-diff-port-371000" cluster
	I0729 10:47:42.037442    6328 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:47:42.037457    6328 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:47:42.037473    6328 cache.go:56] Caching tarball of preloaded images
	I0729 10:47:42.037538    6328 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:47:42.037544    6328 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:47:42.037604    6328 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/default-k8s-diff-port-371000/config.json ...
	I0729 10:47:42.038078    6328 start.go:360] acquireMachinesLock for default-k8s-diff-port-371000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:42.038107    6328 start.go:364] duration metric: took 23.084µs to acquireMachinesLock for "default-k8s-diff-port-371000"
	I0729 10:47:42.038117    6328 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:47:42.038123    6328 fix.go:54] fixHost starting: 
	I0729 10:47:42.038235    6328 fix.go:112] recreateIfNeeded on default-k8s-diff-port-371000: state=Stopped err=<nil>
	W0729 10:47:42.038244    6328 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:47:42.041527    6328 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-371000" ...
	I0729 10:47:42.049418    6328 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:42.049455    6328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:59:9f:d0:0e:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2
	I0729 10:47:42.051499    6328 main.go:141] libmachine: STDOUT: 
	I0729 10:47:42.051518    6328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:42.051547    6328 fix.go:56] duration metric: took 13.42425ms for fixHost
	I0729 10:47:42.051552    6328 start.go:83] releasing machines lock for "default-k8s-diff-port-371000", held for 13.440916ms
	W0729 10:47:42.051560    6328 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:42.051596    6328 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:42.051601    6328 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:47.053645    6328 start.go:360] acquireMachinesLock for default-k8s-diff-port-371000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:47.054020    6328 start.go:364] duration metric: took 284.458µs to acquireMachinesLock for "default-k8s-diff-port-371000"
	I0729 10:47:47.054112    6328 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:47:47.054135    6328 fix.go:54] fixHost starting: 
	I0729 10:47:47.054940    6328 fix.go:112] recreateIfNeeded on default-k8s-diff-port-371000: state=Stopped err=<nil>
	W0729 10:47:47.054966    6328 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:47:47.064655    6328 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-371000" ...
	I0729 10:47:47.076829    6328 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:47.077051    6328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:59:9f:d0:0e:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/default-k8s-diff-port-371000/disk.qcow2
	I0729 10:47:47.086221    6328 main.go:141] libmachine: STDOUT: 
	I0729 10:47:47.086298    6328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:47.086391    6328 fix.go:56] duration metric: took 32.261166ms for fixHost
	I0729 10:47:47.086409    6328 start.go:83] releasing machines lock for "default-k8s-diff-port-371000", held for 32.366708ms
	W0729 10:47:47.086676    6328 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-371000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-371000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:47.094645    6328 out.go:177] 
	W0729 10:47:47.097737    6328 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:47.097766    6328 out.go:239] * 
	* 
	W0729 10:47:47.100309    6328 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:47.109592    6328 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-371000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000: exit status 7 (67.102084ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
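The default-k8s-diff-port profile fails identically, so the socket_vmnet triage above applies unchanged. Once the daemon is reachable again, the recovery path the log itself suggests is to delete and recreate the profile with essentially the test's own arguments (copied from the invocation above; running them before the socket check succeeds will just fail again):

	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-371000
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-371000 --memory=2200 --wait=true --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.30.3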
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-966000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000: exit status 7 (33.72125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
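UserAppExistsAfterStop (and the AddonExistsAfterStop failure that follows) are secondary failures: because SecondStart never brought the cluster back, its kubeconfig context was never recreated. This is straightforward to confirm with stock kubectl (nothing minikube-specific assumed):

	kubectl config get-contexts        # embed-certs-966000 will be absent
	kubectl config current-context     # errors if no context is selected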
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-966000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-966000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-966000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.307084ms)
** stderr ** 
	error: context "embed-certs-966000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-966000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000: exit status 7 (28.231958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-966000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000: exit status 7 (28.728208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
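The want/got diff above lists every expected v1.30.3 image as missing because "minikube image list" ran against a stopped profile and appears to have returned an empty set; it says nothing about the local image cache. If needed, the preload tarball the earlier log lines reference can be checked directly (path copied from the logs above):

	ls -lh /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4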
TestStartStop/group/embed-certs/serial/Pause (0.1s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-966000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-966000 --alsologtostderr -v=1: exit status 83 (39.805666ms)
-- stdout --
	* The control-plane node embed-certs-966000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-966000"
-- /stdout --
** stderr ** 
	I0729 10:47:44.305709    6350 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:44.305845    6350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:44.305849    6350 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:44.305851    6350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:44.306007    6350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:44.306202    6350 out.go:298] Setting JSON to false
	I0729 10:47:44.306209    6350 mustload.go:65] Loading cluster: embed-certs-966000
	I0729 10:47:44.306382    6350 config.go:182] Loaded profile config "embed-certs-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:44.310273    6350 out.go:177] * The control-plane node embed-certs-966000 host is not running: state=Stopped
	I0729 10:47:44.314221    6350 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-966000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-966000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000: exit status 7 (28.708958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-966000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000: exit status 7 (28.241084ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
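Pause exits with status 83 rather than 80 here: per its own output, minikube detects the stopped host up front and prints guidance instead of attempting anything. The recovery the tool itself suggests is simply to start the profile again before pausing (binary path per this report's convention; the start will only succeed once socket_vmnet is reachable):

	out/minikube-darwin-arm64 start -p embed-certs-966000
	out/minikube-darwin-arm64 pause -p embed-certs-966000 --alsologtostderr -v=1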
TestStartStop/group/newest-cni/serial/FirstStart (9.91s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-197000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-197000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.834532792s)
-- stdout --
	* [newest-cni-197000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-197000" primary control-plane node in "newest-cni-197000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-197000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
-- /stdout --
** stderr ** 
	I0729 10:47:44.618691    6367 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:44.618825    6367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:44.618829    6367 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:44.618831    6367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:44.618960    6367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:44.620127    6367 out.go:298] Setting JSON to false
	I0729 10:47:44.636229    6367 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4628,"bootTime":1722270636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:47:44.636306    6367 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:47:44.641204    6367 out.go:177] * [newest-cni-197000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:47:44.647186    6367 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:47:44.647252    6367 notify.go:220] Checking for updates...
	I0729 10:47:44.654232    6367 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:47:44.657141    6367 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:47:44.660204    6367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:47:44.663222    6367 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:47:44.664637    6367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:47:44.668592    6367 config.go:182] Loaded profile config "default-k8s-diff-port-371000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:44.668651    6367 config.go:182] Loaded profile config "multinode-937000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:44.668701    6367 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:47:44.673171    6367 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:47:44.678167    6367 start.go:297] selected driver: qemu2
	I0729 10:47:44.678173    6367 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:47:44.678187    6367 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:47:44.680357    6367 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 10:47:44.680378    6367 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 10:47:44.688162    6367 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:47:44.691311    6367 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 10:47:44.691357    6367 cni.go:84] Creating CNI manager for ""
	I0729 10:47:44.691369    6367 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:47:44.691374    6367 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:47:44.691401    6367 start.go:340] cluster config:
	{Name:newest-cni-197000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-197000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:44.695303    6367 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:44.703147    6367 out.go:177] * Starting "newest-cni-197000" primary control-plane node in "newest-cni-197000" cluster
	I0729 10:47:44.707193    6367 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 10:47:44.707210    6367 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 10:47:44.707222    6367 cache.go:56] Caching tarball of preloaded images
	I0729 10:47:44.707281    6367 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:47:44.707286    6367 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 10:47:44.707349    6367 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/newest-cni-197000/config.json ...
	I0729 10:47:44.707362    6367 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/newest-cni-197000/config.json: {Name:mka34a18a6c63de6cf3f8fe38a9f73395fb4abc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:47:44.707702    6367 start.go:360] acquireMachinesLock for newest-cni-197000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:44.707734    6367 start.go:364] duration metric: took 26.583µs to acquireMachinesLock for "newest-cni-197000"
	I0729 10:47:44.707746    6367 start.go:93] Provisioning new machine with config: &{Name:newest-cni-197000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-197000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:44.707773    6367 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:44.716240    6367 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:47:44.733372    6367 start.go:159] libmachine.API.Create for "newest-cni-197000" (driver="qemu2")
	I0729 10:47:44.733394    6367 client.go:168] LocalClient.Create starting
	I0729 10:47:44.733456    6367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:47:44.733487    6367 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:44.733496    6367 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:44.733532    6367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:47:44.733560    6367 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:44.733567    6367 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:44.733901    6367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:47:44.891956    6367 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:44.924960    6367 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:44.924969    6367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:44.925130    6367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2
	I0729 10:47:44.934146    6367 main.go:141] libmachine: STDOUT: 
	I0729 10:47:44.934163    6367 main.go:141] libmachine: STDERR: 
	I0729 10:47:44.934201    6367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2 +20000M
	I0729 10:47:44.941860    6367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:44.941874    6367 main.go:141] libmachine: STDERR: 
	I0729 10:47:44.941884    6367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2
	I0729 10:47:44.941890    6367 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:44.941904    6367 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:44.941932    6367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:51:38:20:72:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2
	I0729 10:47:44.943510    6367 main.go:141] libmachine: STDOUT: 
	I0729 10:47:44.943523    6367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:44.943540    6367 client.go:171] duration metric: took 210.149583ms to LocalClient.Create
	I0729 10:47:46.945800    6367 start.go:128] duration metric: took 2.23805425s to createHost
	I0729 10:47:46.945868    6367 start.go:83] releasing machines lock for "newest-cni-197000", held for 2.238192125s
	W0729 10:47:46.945917    6367 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:46.961110    6367 out.go:177] * Deleting "newest-cni-197000" in qemu2 ...
	W0729 10:47:46.995270    6367 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:46.995314    6367 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:51.997369    6367 start.go:360] acquireMachinesLock for newest-cni-197000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:51.997924    6367 start.go:364] duration metric: took 429.875µs to acquireMachinesLock for "newest-cni-197000"
	I0729 10:47:51.998066    6367 start.go:93] Provisioning new machine with config: &{Name:newest-cni-197000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-197000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:51.998397    6367 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:52.003016    6367 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:47:52.055868    6367 start.go:159] libmachine.API.Create for "newest-cni-197000" (driver="qemu2")
	I0729 10:47:52.055921    6367 client.go:168] LocalClient.Create starting
	I0729 10:47:52.056067    6367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/ca.pem
	I0729 10:47:52.056145    6367 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:52.056162    6367 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:52.056235    6367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19345-1151/.minikube/certs/cert.pem
	I0729 10:47:52.056280    6367 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:52.056292    6367 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:52.056827    6367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 10:47:52.219748    6367 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:52.364024    6367 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:52.364030    6367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:52.364222    6367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2.raw /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2
	I0729 10:47:52.373882    6367 main.go:141] libmachine: STDOUT: 
	I0729 10:47:52.373907    6367 main.go:141] libmachine: STDERR: 
	I0729 10:47:52.373963    6367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2 +20000M
	I0729 10:47:52.381927    6367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:52.381941    6367 main.go:141] libmachine: STDERR: 
	I0729 10:47:52.381972    6367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2
	I0729 10:47:52.381976    6367 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:52.381984    6367 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:52.382010    6367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:e7:6f:37:62:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2
	I0729 10:47:52.383572    6367 main.go:141] libmachine: STDOUT: 
	I0729 10:47:52.383585    6367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:52.383602    6367 client.go:171] duration metric: took 327.685416ms to LocalClient.Create
	I0729 10:47:54.385754    6367 start.go:128] duration metric: took 2.387380334s to createHost
	I0729 10:47:54.385831    6367 start.go:83] releasing machines lock for "newest-cni-197000", held for 2.3879485s
	W0729 10:47:54.386312    6367 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-197000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-197000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:54.395785    6367 out.go:177] 
	W0729 10:47:54.401104    6367 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:54.401141    6367 out.go:239] * 
	* 
	W0729 10:47:54.403571    6367 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:54.413014    6367 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-197000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000: exit status 7 (70.685084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-197000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.91s)
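
Every qemu2 start in this run dies before the VM boots, with the same root cause visible in the stderr above: Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. the socket_vmnet daemon was not listening on the host. A minimal host-side triage sketch, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (service management may differ on this CI host):

    ls -l /var/run/socket_vmnet              # the daemon's unix socket should exist and be readable
    sudo brew services restart socket_vmnet  # restart the daemon (Homebrew-managed install)

With the daemon down, the ~10s FirstStart duration reflects the two create attempts and the 5-second retry backoff logged above, not a Kubernetes-level failure.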

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-371000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000: exit status 7 (31.299333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-371000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-371000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-371000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.744292ms)

** stderr ** 
	error: context "default-k8s-diff-port-371000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-371000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000: exit status 7 (28.68325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
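
Because the default-k8s-diff-port-371000 VM was never created, minikube never wrote the matching context into the kubeconfig, so every kubectl call here fails with context "..." does not exist rather than with a dashboard-addon problem. A quick way to confirm, using standard kubectl (commands shown for illustration):

    kubectl config get-contexts      # the profile's context will be absent from the list
    kubectl config current-context   # prints the context kubectl would otherwise use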

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-371000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000: exit status 7 (29.140667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
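
The "(-want +got)" block above is a go-cmp style diff: lines prefixed with "-" are images the test expected but did not find. The got side is empty because "image list" ran against a VM that never booted, so all eight v1.30.3 images are reported missing. For a human-readable view of whatever the profile has cached, the same subcommand also accepts a table format (illustrative invocation):

    out/minikube-darwin-arm64 -p default-k8s-diff-port-371000 image list --format=table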

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-371000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-371000 --alsologtostderr -v=1: exit status 83 (39.095583ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-371000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-371000"

-- /stdout --
** stderr ** 
	I0729 10:47:47.374034    6389 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:47.374187    6389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:47.374190    6389 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:47.374192    6389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:47.374323    6389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:47.374548    6389 out.go:298] Setting JSON to false
	I0729 10:47:47.374554    6389 mustload.go:65] Loading cluster: default-k8s-diff-port-371000
	I0729 10:47:47.374728    6389 config.go:182] Loaded profile config "default-k8s-diff-port-371000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:47.377634    6389 out.go:177] * The control-plane node default-k8s-diff-port-371000 host is not running: state=Stopped
	I0729 10:47:47.381621    6389 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-371000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-371000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000: exit status 7 (28.030708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-371000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000: exit status 7 (28.919125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-197000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-197000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.182777042s)

-- stdout --
	* [newest-cni-197000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-197000" primary control-plane node in "newest-cni-197000" cluster
	* Restarting existing qemu2 VM for "newest-cni-197000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-197000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:47:58.055256    6436 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:58.055377    6436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:58.055380    6436 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:58.055383    6436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:58.055528    6436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:47:58.056495    6436 out.go:298] Setting JSON to false
	I0729 10:47:58.072704    6436 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4642,"bootTime":1722270636,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:47:58.072780    6436 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:47:58.076709    6436 out.go:177] * [newest-cni-197000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:47:58.083711    6436 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:47:58.083803    6436 notify.go:220] Checking for updates...
	I0729 10:47:58.090648    6436 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:47:58.093708    6436 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:47:58.096639    6436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:47:58.099681    6436 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:47:58.102686    6436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:47:58.104265    6436 config.go:182] Loaded profile config "newest-cni-197000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 10:47:58.104541    6436 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:47:58.108608    6436 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:47:58.115540    6436 start.go:297] selected driver: qemu2
	I0729 10:47:58.115548    6436 start.go:901] validating driver "qemu2" against &{Name:newest-cni-197000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-197000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:58.115628    6436 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:47:58.117852    6436 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 10:47:58.117893    6436 cni.go:84] Creating CNI manager for ""
	I0729 10:47:58.117900    6436 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:47:58.117935    6436 start.go:340] cluster config:
	{Name:newest-cni-197000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-197000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:58.121354    6436 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:58.128635    6436 out.go:177] * Starting "newest-cni-197000" primary control-plane node in "newest-cni-197000" cluster
	I0729 10:47:58.132653    6436 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 10:47:58.132670    6436 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 10:47:58.132680    6436 cache.go:56] Caching tarball of preloaded images
	I0729 10:47:58.132745    6436 preload.go:172] Found /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:47:58.132751    6436 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 10:47:58.132819    6436 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/newest-cni-197000/config.json ...
	I0729 10:47:58.133277    6436 start.go:360] acquireMachinesLock for newest-cni-197000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:58.133306    6436 start.go:364] duration metric: took 23.083µs to acquireMachinesLock for "newest-cni-197000"
	I0729 10:47:58.133316    6436 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:47:58.133322    6436 fix.go:54] fixHost starting: 
	I0729 10:47:58.133441    6436 fix.go:112] recreateIfNeeded on newest-cni-197000: state=Stopped err=<nil>
	W0729 10:47:58.133450    6436 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:47:58.136703    6436 out.go:177] * Restarting existing qemu2 VM for "newest-cni-197000" ...
	I0729 10:47:58.144679    6436 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:58.144717    6436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:e7:6f:37:62:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2
	I0729 10:47:58.146755    6436 main.go:141] libmachine: STDOUT: 
	I0729 10:47:58.146775    6436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:58.146805    6436 fix.go:56] duration metric: took 13.483791ms for fixHost
	I0729 10:47:58.146809    6436 start.go:83] releasing machines lock for "newest-cni-197000", held for 13.499208ms
	W0729 10:47:58.146816    6436 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:58.146863    6436 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:58.146868    6436 start.go:729] Will try again in 5 seconds ...
	I0729 10:48:03.148888    6436 start.go:360] acquireMachinesLock for newest-cni-197000: {Name:mk01e51d39e704894d50748f48fe698ec9c69c15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:48:03.149485    6436 start.go:364] duration metric: took 442.958µs to acquireMachinesLock for "newest-cni-197000"
	I0729 10:48:03.149657    6436 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:48:03.149677    6436 fix.go:54] fixHost starting: 
	I0729 10:48:03.150409    6436 fix.go:112] recreateIfNeeded on newest-cni-197000: state=Stopped err=<nil>
	W0729 10:48:03.150442    6436 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:48:03.158877    6436 out.go:177] * Restarting existing qemu2 VM for "newest-cni-197000" ...
	I0729 10:48:03.162958    6436 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:48:03.163256    6436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:e7:6f:37:62:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19345-1151/.minikube/machines/newest-cni-197000/disk.qcow2
	I0729 10:48:03.172704    6436 main.go:141] libmachine: STDOUT: 
	I0729 10:48:03.173201    6436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:48:03.173269    6436 fix.go:56] duration metric: took 23.595416ms for fixHost
	I0729 10:48:03.173284    6436 start.go:83] releasing machines lock for "newest-cni-197000", held for 23.751541ms
	W0729 10:48:03.173452    6436 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-197000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-197000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:48:03.181007    6436 out.go:177] 
	W0729 10:48:03.185110    6436 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:48:03.185183    6436 out.go:239] * 
	* 
	W0729 10:48:03.187719    6436 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:48:03.196971    6436 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-197000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000: exit status 7 (68.260959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-197000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
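
Unlike FirstStart, this second start takes the fixHost path (restarting the existing VM), but it fails on the identical refused socket connection, so the "minikube delete -p newest-cni-197000" advice printed above would only recreate a VM that still cannot attach to the vmnet network. Once the socket_vmnet daemon is listening again, a recovery sketch based on the test's own command line (flags beyond the profile name and driver are illustrative):

    out/minikube-darwin-arm64 delete -p newest-cni-197000
    out/minikube-darwin-arm64 start -p newest-cni-197000 --driver=qemu2 --network=socket_vmnet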

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-197000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000: exit status 7 (30.386125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-197000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-197000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-197000 --alsologtostderr -v=1: exit status 83 (41.272833ms)

-- stdout --
	* The control-plane node newest-cni-197000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-197000"

-- /stdout --
** stderr ** 
	I0729 10:48:03.381825    6450 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:48:03.381983    6450 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:48:03.381987    6450 out.go:304] Setting ErrFile to fd 2...
	I0729 10:48:03.381989    6450 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:48:03.382135    6450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:48:03.382358    6450 out.go:298] Setting JSON to false
	I0729 10:48:03.382365    6450 mustload.go:65] Loading cluster: newest-cni-197000
	I0729 10:48:03.382581    6450 config.go:182] Loaded profile config "newest-cni-197000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 10:48:03.386866    6450 out.go:177] * The control-plane node newest-cni-197000 host is not running: state=Stopped
	I0729 10:48:03.390853    6450 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-197000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-197000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000: exit status 7 (29.749ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-197000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000: exit status 7 (29.198375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-197000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (162/282)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 14.49
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.07
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 10.59
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.12
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.35
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 206.55
38 TestAddons/serial/Volcano 37.96
40 TestAddons/serial/GCPAuth/Namespaces 0.07
42 TestAddons/parallel/Registry 12.95
43 TestAddons/parallel/Ingress 18.42
44 TestAddons/parallel/InspektorGadget 10.23
45 TestAddons/parallel/MetricsServer 5.24
48 TestAddons/parallel/CSI 51.61
49 TestAddons/parallel/Headlamp 17.56
50 TestAddons/parallel/CloudSpanner 5.17
51 TestAddons/parallel/LocalPath 51.78
52 TestAddons/parallel/NvidiaDevicePlugin 5.16
53 TestAddons/parallel/Yakd 10.21
54 TestAddons/StoppedEnableDisable 12.39
62 TestHyperKitDriverInstallOrUpdate 10.48
65 TestErrorSpam/setup 34.66
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.24
68 TestErrorSpam/pause 0.68
69 TestErrorSpam/unpause 0.65
70 TestErrorSpam/stop 64.3
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 50.96
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 39.19
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.04
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.52
82 TestFunctional/serial/CacheCmd/cache/add_local 1.12
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.63
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.93
90 TestFunctional/serial/ExtraConfig 37.83
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.67
93 TestFunctional/serial/LogsFileCmd 0.65
94 TestFunctional/serial/InvalidService 4.32
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 8.01
98 TestFunctional/parallel/DryRun 0.22
99 TestFunctional/parallel/InternationalLanguage 0.11
100 TestFunctional/parallel/StatusCmd 0.26
105 TestFunctional/parallel/AddonsCmd 0.1
106 TestFunctional/parallel/PersistentVolumeClaim 26.03
108 TestFunctional/parallel/SSHCmd 0.13
109 TestFunctional/parallel/CpCmd 1.27
111 TestFunctional/parallel/FileSync 0.13
112 TestFunctional/parallel/CertSync 0.41
116 TestFunctional/parallel/NodeLabels 0.11
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.08
120 TestFunctional/parallel/License 0.22
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
132 TestFunctional/parallel/ServiceCmd/DeployApp 6.08
133 TestFunctional/parallel/ServiceCmd/List 0.28
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
136 TestFunctional/parallel/ServiceCmd/Format 0.1
137 TestFunctional/parallel/ServiceCmd/URL 0.1
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.12
139 TestFunctional/parallel/ProfileCmd/profile_list 0.12
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
141 TestFunctional/parallel/MountCmd/any-port 5.08
142 TestFunctional/parallel/MountCmd/specific-port 0.86
143 TestFunctional/parallel/MountCmd/VerifyCleanup 2.11
144 TestFunctional/parallel/Version/short 0.04
145 TestFunctional/parallel/Version/components 0.2
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.09
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
150 TestFunctional/parallel/ImageCommands/ImageBuild 1.56
151 TestFunctional/parallel/ImageCommands/Setup 1.79
152 TestFunctional/parallel/DockerEnv/bash 0.31
153 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.48
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
157 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.39
158 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
159 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
160 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
161 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.26
162 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 199.24
170 TestMultiControlPlane/serial/DeployApp 5.41
171 TestMultiControlPlane/serial/PingHostFromPods 0.76
172 TestMultiControlPlane/serial/AddWorkerNode 59.4
173 TestMultiControlPlane/serial/NodeLabels 0.17
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
175 TestMultiControlPlane/serial/CopyFile 4.33
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 78.31
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 3.26
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.19
221 TestMainNoArgs 0.03
268 TestStoppedBinaryUpgrade/Setup 0.94
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
285 TestNoKubernetes/serial/ProfileList 31.31
286 TestNoKubernetes/serial/Stop 3.36
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
295 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
305 TestStartStop/group/old-k8s-version/serial/Stop 3.63
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
310 TestStartStop/group/no-preload/serial/Stop 2.01
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
327 TestStartStop/group/embed-certs/serial/Stop 3.63
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.44
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
347 TestStartStop/group/newest-cni/serial/Stop 3.35
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-221000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-221000: exit status 85 (90.554916ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-221000 | jenkins | v1.33.1 | 29 Jul 24 09:55 PDT |          |
	|         | -p download-only-221000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 09:55:03
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 09:55:03.449255    1650 out.go:291] Setting OutFile to fd 1 ...
	I0729 09:55:03.449403    1650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 09:55:03.449406    1650 out.go:304] Setting ErrFile to fd 2...
	I0729 09:55:03.449408    1650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 09:55:03.449519    1650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	W0729 09:55:03.449610    1650 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19345-1151/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19345-1151/.minikube/config/config.json: no such file or directory
	I0729 09:55:03.451015    1650 out.go:298] Setting JSON to true
	I0729 09:55:03.468323    1650 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1467,"bootTime":1722270636,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 09:55:03.468391    1650 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 09:55:03.474145    1650 out.go:97] [download-only-221000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 09:55:03.474328    1650 notify.go:220] Checking for updates...
	W0729 09:55:03.474384    1650 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 09:55:03.477088    1650 out.go:169] MINIKUBE_LOCATION=19345
	I0729 09:55:03.480195    1650 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 09:55:03.485130    1650 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 09:55:03.488138    1650 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 09:55:03.491172    1650 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	W0729 09:55:03.497128    1650 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 09:55:03.497365    1650 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 09:55:03.502116    1650 out.go:97] Using the qemu2 driver based on user configuration
	I0729 09:55:03.502136    1650 start.go:297] selected driver: qemu2
	I0729 09:55:03.502158    1650 start.go:901] validating driver "qemu2" against <nil>
	I0729 09:55:03.502220    1650 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 09:55:03.505037    1650 out.go:169] Automatically selected the socket_vmnet network
	I0729 09:55:03.510785    1650 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 09:55:03.510901    1650 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 09:55:03.510928    1650 cni.go:84] Creating CNI manager for ""
	I0729 09:55:03.510945    1650 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 09:55:03.511006    1650 start.go:340] cluster config:
	{Name:download-only-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 09:55:03.516414    1650 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 09:55:03.519955    1650 out.go:97] Downloading VM boot image ...
	I0729 09:55:03.519974    1650 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 09:55:11.043286    1650 out.go:97] Starting "download-only-221000" primary control-plane node in "download-only-221000" cluster
	I0729 09:55:11.043321    1650 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 09:55:11.102724    1650 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 09:55:11.102730    1650 cache.go:56] Caching tarball of preloaded images
	I0729 09:55:11.102895    1650 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 09:55:11.108007    1650 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 09:55:11.108014    1650 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 09:55:11.188371    1650 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 09:55:23.142267    1650 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 09:55:23.142422    1650 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 09:55:23.836605    1650 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 09:55:23.836795    1650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/download-only-221000/config.json ...
	I0729 09:55:23.836814    1650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/download-only-221000/config.json: {Name:mk7d7e1298c35a725f1a1a40593756d5303c6732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 09:55:23.837617    1650 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 09:55:23.837867    1650 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 09:55:24.226915    1650 out.go:169] 
	W0729 09:55:24.230929    1650 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19345-1151/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108d11a60 0x108d11a60 0x108d11a60 0x108d11a60 0x108d11a60 0x108d11a60 0x108d11a60] Decompressors:map[bz2:0x1400068c0f0 gz:0x1400068c0f8 tar:0x1400068c0a0 tar.bz2:0x1400068c0b0 tar.gz:0x1400068c0c0 tar.xz:0x1400068c0d0 tar.zst:0x1400068c0e0 tbz2:0x1400068c0b0 tgz:0x1400068c0c0 txz:0x1400068c0d0 tzst:0x1400068c0e0 xz:0x1400068c100 zip:0x1400068c110 zst:0x1400068c108] Getters:map[file:0x140002ba730 http:0x14000bc4280 https:0x14000bc42d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 09:55:24.230952    1650 out_reason.go:110] 
	W0729 09:55:24.237914    1650 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 09:55:24.241895    1650 out.go:169] 
	
	
	* The control-plane node download-only-221000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-221000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-221000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (14.49s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-826000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-826000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (14.493514125s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (14.49s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-826000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-826000: exit status 85 (73.276417ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-221000 | jenkins | v1.33.1 | 29 Jul 24 09:55 PDT |                     |
	|         | -p download-only-221000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 09:55 PDT | 29 Jul 24 09:55 PDT |
	| delete  | -p download-only-221000        | download-only-221000 | jenkins | v1.33.1 | 29 Jul 24 09:55 PDT | 29 Jul 24 09:55 PDT |
	| start   | -o=json --download-only        | download-only-826000 | jenkins | v1.33.1 | 29 Jul 24 09:55 PDT |                     |
	|         | -p download-only-826000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 09:55:24
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 09:55:24.645563    1674 out.go:291] Setting OutFile to fd 1 ...
	I0729 09:55:24.645718    1674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 09:55:24.645721    1674 out.go:304] Setting ErrFile to fd 2...
	I0729 09:55:24.645724    1674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 09:55:24.645843    1674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 09:55:24.646869    1674 out.go:298] Setting JSON to true
	I0729 09:55:24.662952    1674 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1488,"bootTime":1722270636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 09:55:24.663023    1674 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 09:55:24.666481    1674 out.go:97] [download-only-826000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 09:55:24.666588    1674 notify.go:220] Checking for updates...
	I0729 09:55:24.670323    1674 out.go:169] MINIKUBE_LOCATION=19345
	I0729 09:55:24.673385    1674 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 09:55:24.677379    1674 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 09:55:24.678867    1674 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 09:55:24.682335    1674 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	W0729 09:55:24.688346    1674 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 09:55:24.688509    1674 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 09:55:24.691257    1674 out.go:97] Using the qemu2 driver based on user configuration
	I0729 09:55:24.691266    1674 start.go:297] selected driver: qemu2
	I0729 09:55:24.691270    1674 start.go:901] validating driver "qemu2" against <nil>
	I0729 09:55:24.691324    1674 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 09:55:24.694369    1674 out.go:169] Automatically selected the socket_vmnet network
	I0729 09:55:24.699585    1674 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 09:55:24.699678    1674 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 09:55:24.699730    1674 cni.go:84] Creating CNI manager for ""
	I0729 09:55:24.699739    1674 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 09:55:24.699744    1674 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 09:55:24.699784    1674 start.go:340] cluster config:
	{Name:download-only-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-826000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 09:55:24.703145    1674 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 09:55:24.706310    1674 out.go:97] Starting "download-only-826000" primary control-plane node in "download-only-826000" cluster
	I0729 09:55:24.706319    1674 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 09:55:24.760730    1674 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 09:55:24.760744    1674 cache.go:56] Caching tarball of preloaded images
	I0729 09:55:24.760895    1674 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 09:55:24.764127    1674 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 09:55:24.764135    1674 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 09:55:24.842466    1674 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-826000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-826000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-826000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (10.59s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-076000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-076000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (10.589303084s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (10.59s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-076000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-076000: exit status 85 (79.183333ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-221000 | jenkins | v1.33.1 | 29 Jul 24 09:55 PDT |                     |
	|         | -p download-only-221000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 09:55 PDT | 29 Jul 24 09:55 PDT |
	| delete  | -p download-only-221000             | download-only-221000 | jenkins | v1.33.1 | 29 Jul 24 09:55 PDT | 29 Jul 24 09:55 PDT |
	| start   | -o=json --download-only             | download-only-826000 | jenkins | v1.33.1 | 29 Jul 24 09:55 PDT |                     |
	|         | -p download-only-826000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 09:55 PDT | 29 Jul 24 09:55 PDT |
	| delete  | -p download-only-826000             | download-only-826000 | jenkins | v1.33.1 | 29 Jul 24 09:55 PDT | 29 Jul 24 09:55 PDT |
	| start   | -o=json --download-only             | download-only-076000 | jenkins | v1.33.1 | 29 Jul 24 09:55 PDT |                     |
	|         | -p download-only-076000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 09:55:39
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 09:55:39.424423    1698 out.go:291] Setting OutFile to fd 1 ...
	I0729 09:55:39.424563    1698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 09:55:39.424567    1698 out.go:304] Setting ErrFile to fd 2...
	I0729 09:55:39.424570    1698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 09:55:39.424708    1698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 09:55:39.425708    1698 out.go:298] Setting JSON to true
	I0729 09:55:39.441808    1698 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1503,"bootTime":1722270636,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 09:55:39.441870    1698 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 09:55:39.446268    1698 out.go:97] [download-only-076000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 09:55:39.446380    1698 notify.go:220] Checking for updates...
	I0729 09:55:39.450217    1698 out.go:169] MINIKUBE_LOCATION=19345
	I0729 09:55:39.453164    1698 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 09:55:39.457145    1698 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 09:55:39.460212    1698 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 09:55:39.463225    1698 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	W0729 09:55:39.469164    1698 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 09:55:39.469303    1698 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 09:55:39.472160    1698 out.go:97] Using the qemu2 driver based on user configuration
	I0729 09:55:39.472172    1698 start.go:297] selected driver: qemu2
	I0729 09:55:39.472178    1698 start.go:901] validating driver "qemu2" against <nil>
	I0729 09:55:39.472249    1698 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 09:55:39.475183    1698 out.go:169] Automatically selected the socket_vmnet network
	I0729 09:55:39.480347    1698 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 09:55:39.480428    1698 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 09:55:39.480458    1698 cni.go:84] Creating CNI manager for ""
	I0729 09:55:39.480466    1698 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 09:55:39.480471    1698 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 09:55:39.480502    1698 start.go:340] cluster config:
	{Name:download-only-076000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-076000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 09:55:39.484009    1698 iso.go:125] acquiring lock: {Name:mk3d2e4bccb82483c5680eeff1ee97ecdcfde798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 09:55:39.485439    1698 out.go:97] Starting "download-only-076000" primary control-plane node in "download-only-076000" cluster
	I0729 09:55:39.485448    1698 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 09:55:39.543629    1698 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 09:55:39.543657    1698 cache.go:56] Caching tarball of preloaded images
	I0729 09:55:39.543833    1698 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 09:55:39.549063    1698 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 09:55:39.549084    1698 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 09:55:39.629368    1698 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 09:55:47.812821    1698 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 09:55:47.812951    1698 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 09:55:48.332132    1698 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 09:55:48.332345    1698 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/download-only-076000/config.json ...
	I0729 09:55:48.332362    1698 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/download-only-076000/config.json: {Name:mk50a020a704fad3d007252ba4aa9f03ce30c45f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 09:55:48.332586    1698 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 09:55:48.332699    1698 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19345-1151/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-076000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-076000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.12s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-076000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.35s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-025000 --alsologtostderr --binary-mirror http://127.0.0.1:49325 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-025000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-025000
--- PASS: TestBinaryMirror (0.35s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-378000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-378000: exit status 85 (58.255917ms)

-- stdout --
	* Profile "addons-378000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-378000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-378000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-378000: exit status 85 (54.169917ms)

-- stdout --
	* Profile "addons-378000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-378000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (206.55s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-378000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-378000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m26.554403292s)
--- PASS: TestAddons/Setup (206.55s)

TestAddons/serial/Volcano (37.96s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.528625ms
addons_test.go:897: volcano-scheduler stabilized in 7.551458ms
addons_test.go:905: volcano-admission stabilized in 7.571792ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-xmgzd" [80304ff1-b807-4161-98b2-a3e1fac5b399] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003768875s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-4pq8m" [d768be59-aad3-4be1-bc22-40a893fb9875] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003649166s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-vm8ds" [624a0b44-98f3-498b-968f-dd34278c51e3] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003556792s
addons_test.go:932: (dbg) Run:  kubectl --context addons-378000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-378000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-378000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c5afbc4f-5abd-4692-9481-3f5352d61a7b] Pending
helpers_test.go:344: "test-job-nginx-0" [c5afbc4f-5abd-4692-9481-3f5352d61a7b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [c5afbc4f-5abd-4692-9481-3f5352d61a7b] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003768417s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-378000 addons disable volcano --alsologtostderr -v=1: (9.703799875s)
--- PASS: TestAddons/serial/Volcano (37.96s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-378000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-378000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/parallel/Registry (12.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.230208ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-rc4kb" [aa9b3c36-2ea8-4463-a6a0-43c77c5e705e] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004880166s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rhvxb" [941e34a5-0e7f-44d6-b256-98d713ec85dd] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004659542s
addons_test.go:342: (dbg) Run:  kubectl --context addons-378000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-378000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-378000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.677794542s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 ip
2024/07/29 10:00:23 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (12.95s)

TestAddons/parallel/Ingress (18.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-378000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-378000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-378000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [358e42c5-931d-4331-985e-1d05ae5aed37] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [358e42c5-931d-4331-985e-1d05ae5aed37] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003069917s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-378000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-378000 addons disable ingress --alsologtostderr -v=1: (7.204954209s)
--- PASS: TestAddons/parallel/Ingress (18.42s)

TestAddons/parallel/InspektorGadget (10.23s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bjxt8" [0b797ea1-6024-445c-97a7-2e05522be66e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003962s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-378000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-378000: (5.225002875s)
--- PASS: TestAddons/parallel/InspektorGadget (10.23s)

TestAddons/parallel/MetricsServer (5.24s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.521791ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-wxsnp" [49e03c98-210c-4e1f-905a-988d98bd6f2f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002876084s
addons_test.go:417: (dbg) Run:  kubectl --context addons-378000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.24s)
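
Beyond pod health, the only assertion here is that `kubectl top pods` succeeds, which in turn requires metrics-server to be registered and answering on the metrics API. A hedged stand-in for that step (the real test runs the command through its own exec helpers):

// Verify metrics-server is serving pod metrics by shelling out to
// `kubectl top pods`; the command fails until the metrics API responds.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "addons-378000",
		"top", "pods", "-n", "kube-system").CombinedOutput()
	if err != nil {
		fmt.Printf("metrics API not ready: %v\n%s", err, out)
		return
	}
	fmt.Printf("metrics-server is serving:\n%s", out)
}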

TestAddons/parallel/CSI (51.61s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.09675ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-378000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-378000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [64f50cc0-05fe-4622-84d7-0ea0756cfdb9] Pending
helpers_test.go:344: "task-pv-pod" [64f50cc0-05fe-4622-84d7-0ea0756cfdb9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [64f50cc0-05fe-4622-84d7-0ea0756cfdb9] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.002854209s
addons_test.go:590: (dbg) Run:  kubectl --context addons-378000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-378000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-378000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-378000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-378000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-378000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-378000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [28c40909-20c4-4951-808b-6c3d1a9454f7] Pending
helpers_test.go:344: "task-pv-pod-restore" [28c40909-20c4-4951-808b-6c3d1a9454f7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [28c40909-20c4-4951-808b-6c3d1a9454f7] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00396575s
addons_test.go:632: (dbg) Run:  kubectl --context addons-378000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-378000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-378000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-378000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.08701325s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.61s)
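
The long run of helpers_test.go:394 lines above is a poll loop: the helper re-reads the PVC phase until it reports Bound or the deadline passes. A simplified Go version of that loop, with illustrative names (waitPVCBound is not a function in the suite):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound polls `kubectl get pvc -o jsonpath={.status.phase}` until the
// claim reports Bound, mirroring the repeated helpers_test.go:394 calls above.
func waitPVCBound(kubeContext, name, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-378000", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pvc hpvc is Bound")
}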

TestAddons/parallel/Headlamp (17.56s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-378000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-5wrln" [24f268b0-fcd7-4cae-bccb-e76588a57353] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-5wrln" [24f268b0-fcd7-4cae-bccb-e76588a57353] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003888083s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-378000 addons disable headlamp --alsologtostderr -v=1: (5.211244583s)
--- PASS: TestAddons/parallel/Headlamp (17.56s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-jcdpz" [c0d69b9b-4721-40ab-870f-8996aa2410ae] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003728333s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-378000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (51.78s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-378000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-378000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [500020a6-6d2a-4fd6-8351-3bb123c5ea82] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [500020a6-6d2a-4fd6-8351-3bb123c5ea82] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [500020a6-6d2a-4fd6-8351-3bb123c5ea82] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003926s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-378000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 ssh "cat /opt/local-path-provisioner/pvc-c4cd271c-c514-4375-84a1-408a5ec56e85_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-378000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-378000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-378000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.327238084s)
--- PASS: TestAddons/parallel/LocalPath (51.78s)
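
The read-back step at addons_test.go:1009 works because local-path provisions each volume under /opt/local-path-provisioner/<volume>_<namespace>_<pvc> on the node. A sketch of that lookup, resolving the generated volume name via kubectl's jsonpath instead of hard-coding it (illustrative, not the test's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The PVC's .spec.volumeName is the generated pvc-<uid> seen in the log.
	vol, err := exec.Command("kubectl", "--context", "addons-378000",
		"get", "pvc", "test-pvc", "-n", "default",
		"-o", "jsonpath={.spec.volumeName}").Output()
	if err != nil {
		fmt.Println("could not read pvc:", err)
		return
	}
	path := fmt.Sprintf("/opt/local-path-provisioner/%s_default_test-pvc/file1",
		strings.TrimSpace(string(vol)))
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "addons-378000",
		"ssh", "cat "+path).CombinedOutput()
	if err != nil {
		fmt.Printf("read failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("file1 contents: %s\n", out)
}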

TestAddons/parallel/NvidiaDevicePlugin (5.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7vpr7" [9b382c2b-2d7b-44d7-b33f-e30458260516] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004056625s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-378000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

TestAddons/parallel/Yakd (10.21s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-s7mfz" [8104df37-8792-462c-b9d5-9ae1ae4d43d9] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004430334s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-378000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-378000 addons disable yakd --alsologtostderr -v=1: (5.205833834s)
--- PASS: TestAddons/parallel/Yakd (10.21s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-378000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-378000: (12.204922625s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-378000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-378000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-378000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (10.48s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.48s)

TestErrorSpam/setup (34.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-073000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-073000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 --driver=qemu2 : (34.662697916s)
--- PASS: TestErrorSpam/setup (34.66s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 pause
--- PASS: TestErrorSpam/pause (0.68s)

TestErrorSpam/unpause (0.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 unpause
--- PASS: TestErrorSpam/unpause (0.65s)

TestErrorSpam/stop (64.3s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 stop: (12.201966875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 stop: (26.069681166s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-073000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-073000 stop: (26.023981333s)
--- PASS: TestErrorSpam/stop (64.30s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19345-1151/.minikube/files/etc/test/nested/copy/1648/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.96s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-398000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0729 10:04:17.511338    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:04:17.518123    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:04:17.530207    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:04:17.552263    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:04:17.592326    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:04:17.674378    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:04:17.836431    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:04:18.158488    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:04:18.800629    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:04:20.082797    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:04:22.644892    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:04:27.766853    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-398000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (50.95814425s)
--- PASS: TestFunctional/serial/StartWithProxy (50.96s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.19s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-398000 --alsologtostderr -v=8
E0729 10:04:38.008808    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:04:58.490407    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-398000 --alsologtostderr -v=8: (39.192162875s)
functional_test.go:659: soft start took 39.192551792s for "functional-398000" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.19s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-398000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.52s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local3130854342/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cache add minikube-local-cache-test:functional-398000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cache delete minikube-local-cache-test:functional-398000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-398000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (70.443833ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.63s)
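
The cache_reload sequence is: remove the cached image inside the node, prove `crictl inspecti` now fails, run `minikube cache reload`, and prove the image is back. A compressed Go replay of those four steps (simplified; the suite uses its own command helpers):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	mk := "out/minikube-darwin-arm64"
	profile := "functional-398000"
	img := "registry.k8s.io/pause:latest"

	run(mk, "-p", profile, "ssh", "sudo docker rmi "+img)
	if run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img) == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	run(mk, "-p", profile, "cache", "reload")
	if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}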

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.66s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 kubectl -- --context functional-398000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-398000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

TestFunctional/serial/ExtraConfig (37.83s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-398000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0729 10:05:39.451365    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-398000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.834628584s)
functional_test.go:757: restart took 37.834782083s for "functional-398000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.83s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-398000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
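
The phase/status pairs above come from decoding `kubectl get po -o=json`: the check reads each control-plane pod's status.phase and its Ready condition. An illustrative decoder covering just those fields (the struct here is a minimal stand-in, not the suite's types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList declares only the fields the health check needs.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-398000",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}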

TestFunctional/serial/LogsCmd (0.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.67s)

TestFunctional/serial/LogsFileCmd (0.65s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1158436495/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.65s)

TestFunctional/serial/InvalidService (4.32s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-398000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-398000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-398000: exit status 115 (101.055042ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31779 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-398000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-398000 delete -f testdata/invalidsvc.yaml: (1.1211175s)
--- PASS: TestFunctional/serial/InvalidService (4.32s)
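
Exit status 115 is the signal the test keys on: minikube maps this SVC_UNREACHABLE failure to a distinct process exit code, so a caller can branch on it without parsing stderr. A sketch of that inspection (binary path and profile name copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "service", "invalid-svc",
		"-p", "functional-398000")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The test expects 115 here, the code shown for SVC_UNREACHABLE above.
		fmt.Println("minikube service exited with status", exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Println("service resolved unexpectedly")
}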

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 config get cpus: exit status 14 (29.5195ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 config get cpus: exit status 14 (29.703833ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

TestFunctional/parallel/DashboardCmd (8.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-398000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-398000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2598: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.01s)
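
The `unable to kill pid ... process already finished` message is benign: the test starts the dashboard as a background daemon, and stopping it races with the process exiting on its own. A sketch of that start/stop pattern and the tolerated error (not the suite's helper code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "dashboard", "--url",
		"--port", "36195", "-p", "functional-398000")
	if err := cmd.Start(); err != nil {
		fmt.Println("failed to start dashboard:", err)
		return
	}
	// ... exercise the dashboard URL here ...
	if err := cmd.Process.Kill(); err != nil {
		// Matches the benign "os: process already finished" in the log.
		fmt.Println("unable to kill pid", cmd.Process.Pid, "-", err)
	}
	cmd.Wait() // always reap the child
}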

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-398000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-398000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.736083ms)

-- stdout --
	* [functional-398000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile

-- /stdout --
** stderr **
	I0729 10:06:46.114938    2585 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:06:46.115067    2585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:06:46.115070    2585 out.go:304] Setting ErrFile to fd 2...
	I0729 10:06:46.115072    2585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:06:46.115204    2585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:06:46.116251    2585 out.go:298] Setting JSON to false
	I0729 10:06:46.132726    2585 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2170,"bootTime":1722270636,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:06:46.132806    2585 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:06:46.138036    2585 out.go:177] * [functional-398000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:06:46.145002    2585 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:06:46.145103    2585 notify.go:220] Checking for updates...
	I0729 10:06:46.152058    2585 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:06:46.155081    2585 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:06:46.158071    2585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:06:46.161095    2585 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:06:46.164008    2585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:06:46.167330    2585 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:06:46.167583    2585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:06:46.172110    2585 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:06:46.179063    2585 start.go:297] selected driver: qemu2
	I0729 10:06:46.179070    2585 start.go:901] validating driver "qemu2" against &{Name:functional-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:06:46.179146    2585 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:06:46.185069    2585 out.go:177] 
	W0729 10:06:46.188957    2585 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 10:06:46.193072    2585 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-398000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
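
The RSRC_INSUFFICIENT_REQ_MEMORY exit happens before any VM work: with --dry-run, minikube still validates the requested memory against a usable minimum (1800MB in the message above). A toy version of that validation, with deliberately simplified parsing (the real code uses minikube's own unit handling):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

const minUsableMB = 1800 // the floor quoted in the error message above

// parseMB accepts only plain "<n>MB" values; real flag parsing is richer.
func parseMB(s string) (int, error) {
	return strconv.Atoi(strings.TrimSuffix(strings.ToUpper(s), "MB"))
}

func main() {
	requested := "250MB" // the --memory flag from the dry-run invocation
	mb, err := parseMB(requested)
	if err != nil {
		fmt.Println("bad memory value:", err)
		return
	}
	if mb < minUsableMB {
		fmt.Printf("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB\n", mb, minUsableMB)
		return
	}
	fmt.Println("memory request accepted")
}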

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-398000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-398000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.066208ms)

-- stdout --
	* [functional-398000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant

-- /stdout --
** stderr **
	I0729 10:06:45.998689    2581 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:06:45.998790    2581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:06:45.998794    2581 out.go:304] Setting ErrFile to fd 2...
	I0729 10:06:45.998797    2581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:06:45.998932    2581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
	I0729 10:06:46.000464    2581 out.go:298] Setting JSON to false
	I0729 10:06:46.017508    2581 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2170,"bootTime":1722270636,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 10:06:46.017593    2581 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:06:46.023147    2581 out.go:177] * [functional-398000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0729 10:06:46.031044    2581 out.go:177]   - MINIKUBE_LOCATION=19345
	I0729 10:06:46.031130    2581 notify.go:220] Checking for updates...
	I0729 10:06:46.038920    2581 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	I0729 10:06:46.042036    2581 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:06:46.045075    2581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:06:46.048035    2581 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	I0729 10:06:46.051071    2581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:06:46.054312    2581 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:06:46.054547    2581 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:06:46.057021    2581 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0729 10:06:46.064063    2581 start.go:297] selected driver: qemu2
	I0729 10:06:46.064069    2581 start.go:901] validating driver "qemu2" against &{Name:functional-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:06:46.064115    2581 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:06:46.070043    2581 out.go:177] 
	W0729 10:06:46.074062    2581 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 10:06:46.078084    2581 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
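
The French output above is the point of this test: it reruns start under a French locale with a deliberately undersized memory request and asserts that the failure message is localized. The starred line reads "Using the qemu2 driver based on the existing profile", and the X lines read "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB". A reproduction sketch, assuming minikube picks the language up from LC_ALL; the --dry-run and --memory flags are hedged guesses at the logged invocation, which this transcript does not show:

	$ LC_ALL=fr out/minikube-darwin-arm64 start -p functional-398000 --dry-run --memory 250MB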

TestFunctional/parallel/StatusCmd (0.26s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)
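
status -f takes a Go template rendered against minikube's status struct, so host:, kublet:, and apiserver: in the command above are literal labels while {{.Host}}, {{.Kubelet}}, {{.APIServer}}, and {{.Kubeconfig}} are the fields; the kublet spelling is a label in the test's template, not an error. A smaller sketch against the same profile:

	$ out/minikube-darwin-arm64 -p functional-398000 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'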

TestFunctional/parallel/AddonsCmd (0.1s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (26.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bbd11c55-4d4e-4b5e-915d-3e0a9643c5f9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004108083s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-398000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-398000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-398000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-398000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4aa7e3c7-55ef-4235-9015-8bc4e9a1bcea] Pending
helpers_test.go:344: "sp-pod" [4aa7e3c7-55ef-4235-9015-8bc4e9a1bcea] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4aa7e3c7-55ef-4235-9015-8bc4e9a1bcea] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.002740875s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-398000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-398000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-398000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3b87d562-e558-45fc-bfdf-f34035f108b3] Pending
helpers_test.go:344: "sp-pod" [3b87d562-e558-45fc-bfdf-f34035f108b3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3b87d562-e558-45fc-bfdf-f34035f108b3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003210458s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-398000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.03s)
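
The pass above shows the claim is backed by durable storage: the first sp-pod writes /tmp/mount/foo, is deleted, and the recreated pod still lists the file through the same PVC. The testdata manifests are not reproduced in this log; a comparable claim against the default storage class would look like the following sketch (not the actual testdata/storage-provisioner/pvc.yaml):

	$ kubectl --context functional-398000 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	EOF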

TestFunctional/parallel/SSHCmd (0.13s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (1.27s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh -n functional-398000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cp functional-398000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1260329952/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh -n functional-398000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh -n functional-398000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.27s)
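
minikube cp treats a bare path as the host side and <node>:<path> as the guest side, which is what the three runs above exercise: host to guest, guest to host, and host to a guest path that does not exist yet. For example, to copy the test file back out of this node:

	$ out/minikube-darwin-arm64 -p functional-398000 cp functional-398000:/home/docker/cp-test.txt ./cp-test.txt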

TestFunctional/parallel/FileSync (0.13s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1648/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /etc/test/nested/copy/1648/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.13s)

TestFunctional/parallel/CertSync (0.41s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1648.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /etc/ssl/certs/1648.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1648.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /usr/share/ca-certificates/1648.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/16482.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /etc/ssl/certs/16482.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/16482.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /usr/share/ca-certificates/16482.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)
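
The /etc/ssl/certs/51391683.0 and 3ec20f2e.0 names are OpenSSL subject-hash entries: alongside each synced PEM, the cert sync is expected to install a <subject_hash>.0 link so TLS clients can look the certificate up by subject. The hash for any PEM file (cert.pem here is a placeholder) comes from:

	$ openssl x509 -noout -subject_hash -in cert.pem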

TestFunctional/parallel/NodeLabels (0.11s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-398000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh "sudo systemctl is-active crio": exit status 1 (79.385875ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)
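
The exit status 3 above is the expected outcome, not a failure: systemctl is-active prints the unit state and exits non-zero when the unit is not active (3 for inactive), so on this docker-runtime cluster the crio unit should report exactly that. Checked by hand:

	$ out/minikube-darwin-arm64 -p functional-398000 ssh 'sudo systemctl is-active crio; echo exit=$?'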

TestFunctional/parallel/License (0.22s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-398000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-398000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-398000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-398000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2440: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-398000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-398000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5a7be7db-5c1b-45ab-88fd-a7479d0ebbdf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5a7be7db-5c1b-45ab-88fd-a7479d0ebbdf] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003805208s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-398000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.166.230 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-398000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
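
Taken together, the tunnel subtests cover the whole path: minikube tunnel routes the cluster's service network to the VM, which is why the nginx-svc LoadBalancer above obtained the ingress IP 10.111.166.230 and why its cluster DNS name resolved from the host through 10.96.0.10. A manual equivalent for the same profile, with kubectl run from a second shell while the tunnel holds:

	$ out/minikube-darwin-arm64 -p functional-398000 tunnel
	$ kubectl --context functional-398000 get svc nginx-svc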

TestFunctional/parallel/ServiceCmd/DeployApp (6.08s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-398000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-398000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-8xsw8" [110e2cb1-e1db-4261-9072-eb3969baa002] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-8xsw8" [110e2cb1-e1db-4261-9072-eb3969baa002] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003868333s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.08s)

TestFunctional/parallel/ServiceCmd/List (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 service list -o json
functional_test.go:1490: Took "278.735125ms" to run "out/minikube-darwin-arm64 -p functional-398000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:31640
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:31640
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "83.077583ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.625708ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "85.055584ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "34.613458ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (5.08s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3890308663/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722272797659490000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3890308663/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722272797659490000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3890308663/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722272797659490000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3890308663/001/test-1722272797659490000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.257042ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 17:06 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 17:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 17:06 test-1722272797659490000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh cat /mount-9p/test-1722272797659490000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-398000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [671d1ef6-f1c6-4d78-ab80-98852030666b] Pending
helpers_test.go:344: "busybox-mount" [671d1ef6-f1c6-4d78-ab80-98852030666b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [671d1ef6-f1c6-4d78-ab80-98852030666b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [671d1ef6-f1c6-4d78-ab80-98852030666b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.001929042s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-398000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3890308663/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.08s)
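
minikube mount exports the host directory into the guest over 9p, and the mount comes up asynchronously: the first findmnt probe failing with exit status 1 and the immediate retry succeeding is the normal startup race, not flakiness. A manual equivalent with a hypothetical host path:

	$ out/minikube-darwin-arm64 mount -p functional-398000 /tmp/demo:/mount-9p &
	$ out/minikube-darwin-arm64 -p functional-398000 ssh 'findmnt -T /mount-9p'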

TestFunctional/parallel/MountCmd/specific-port (0.86s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port796730148/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.388541ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port796730148/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh "sudo umount -f /mount-9p": exit status 1 (62.640958ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-398000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port796730148/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.86s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3249420161/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3249420161/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3249420161/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T" /mount1: exit status 1 (64.626292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T" /mount1: exit status 1 (60.131458ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-398000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3249420161/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3249420161/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3249420161/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.2s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.20s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-398000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-398000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-398000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-398000 image ls --format short --alsologtostderr:
I0729 10:06:57.805016    2740 out.go:291] Setting OutFile to fd 1 ...
I0729 10:06:57.805186    2740 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:06:57.805190    2740 out.go:304] Setting ErrFile to fd 2...
I0729 10:06:57.805193    2740 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:06:57.805328    2740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
I0729 10:06:57.805800    2740 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:06:57.805862    2740 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:06:57.806939    2740 ssh_runner.go:195] Run: systemctl --version
I0729 10:06:57.806946    2740 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/functional-398000/id_rsa Username:docker}
I0729 10:06:57.835011    2740 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)
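
image ls renders one inventory in four formats: short (above) prints bare repo:tag names, while the table, json, and yaml variants that follow add image IDs and sizes. As each stderr trace shows, all of them come from a single docker images --no-trunc --format "{{json .}}" call inside the VM; only the client-side rendering differs:

	$ out/minikube-darwin-arm64 -p functional-398000 image ls --format table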

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-398000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-398000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-398000 | b6269232839bf | 30B    |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-398000 image ls --format table --alsologtostderr:
I0729 10:06:57.962893    2749 out.go:291] Setting OutFile to fd 1 ...
I0729 10:06:57.963033    2749 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:06:57.963037    2749 out.go:304] Setting ErrFile to fd 2...
I0729 10:06:57.963039    2749 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:06:57.963173    2749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
I0729 10:06:57.963598    2749 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:06:57.963662    2749 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:06:57.964393    2749 ssh_runner.go:195] Run: systemctl --version
I0729 10:06:57.964401    2749 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/functional-398000/id_rsa Username:docker}
I0729 10:06:57.991712    2749 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-398000 image ls --format json --alsologtostderr:
[{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b6269232839bf94f3bdc63242cc91cefaf0a1a898ca36537909fbad8d35d3c70","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-398000"],"size":"30"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"014faa467e29798aeef733fe6d1a3
b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"re
poTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-398000"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"
size":"3550000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-398000 image ls --format json --alsologtostderr:
I0729 10:06:57.891433    2745 out.go:291] Setting OutFile to fd 1 ...
I0729 10:06:57.891571    2745 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:06:57.891575    2745 out.go:304] Setting ErrFile to fd 2...
I0729 10:06:57.891577    2745 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:06:57.891703    2745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
I0729 10:06:57.892156    2745 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:06:57.892218    2745 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:06:57.892928    2745 ssh_runner.go:195] Run: systemctl --version
I0729 10:06:57.892936    2745 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/functional-398000/id_rsa Username:docker}
I0729 10:06:57.919930    2745 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-398000 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: b6269232839bf94f3bdc63242cc91cefaf0a1a898ca36537909fbad8d35d3c70
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-398000
size: "30"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-398000
size: "4780000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-398000 image ls --format yaml --alsologtostderr:
I0729 10:06:57.805030    2741 out.go:291] Setting OutFile to fd 1 ...
I0729 10:06:57.805176    2741 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:06:57.805179    2741 out.go:304] Setting ErrFile to fd 2...
I0729 10:06:57.805182    2741 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:06:57.805333    2741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
I0729 10:06:57.805727    2741 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:06:57.805784    2741 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:06:57.806588    2741 ssh_runner.go:195] Run: systemctl --version
I0729 10:06:57.806596    2741 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/functional-398000/id_rsa Username:docker}
I0729 10:06:57.834283    2741 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh pgrep buildkitd: exit status 1 (61.404834ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image build -t localhost/my-image:functional-398000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-398000 image build -t localhost/my-image:functional-398000 testdata/build --alsologtostderr: (1.422752083s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-398000 image build -t localhost/my-image:functional-398000 testdata/build --alsologtostderr:
I0729 10:06:57.950578    2748 out.go:291] Setting OutFile to fd 1 ...
I0729 10:06:57.950851    2748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:06:57.950855    2748 out.go:304] Setting ErrFile to fd 2...
I0729 10:06:57.950857    2748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:06:57.950995    2748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19345-1151/.minikube/bin
I0729 10:06:57.951459    2748 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:06:57.952303    2748 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:06:57.953114    2748 ssh_runner.go:195] Run: systemctl --version
I0729 10:06:57.953123    2748 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19345-1151/.minikube/machines/functional-398000/id_rsa Username:docker}
I0729 10:06:57.979837    2748 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3172136054.tar
I0729 10:06:57.979886    2748 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 10:06:57.983916    2748 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3172136054.tar
I0729 10:06:57.985479    2748 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3172136054.tar: stat -c "%s %y" /var/lib/minikube/build/build.3172136054.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3172136054.tar': No such file or directory
I0729 10:06:57.985493    2748 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3172136054.tar --> /var/lib/minikube/build/build.3172136054.tar (3072 bytes)
I0729 10:06:57.994661    2748 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3172136054
I0729 10:06:57.998547    2748 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3172136054 -xf /var/lib/minikube/build/build.3172136054.tar
I0729 10:06:58.004104    2748 docker.go:360] Building image: /var/lib/minikube/build/build.3172136054
I0729 10:06:58.004146    2748 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-398000 /var/lib/minikube/build/build.3172136054
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.2s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:0e0022600213045a034d5c92efd6fa4ba8b8569d267e1133c7c37ead57686f40 done
#8 naming to localhost/my-image:functional-398000 done
#8 DONE 0.0s
I0729 10:06:59.327207    2748 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-398000 /var/lib/minikube/build/build.3172136054: (1.323088583s)
I0729 10:06:59.327279    2748 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3172136054
I0729 10:06:59.331217    2748 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3172136054.tar
I0729 10:06:59.334591    2748 build_images.go:217] Built localhost/my-image:functional-398000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3172136054.tar
I0729 10:06:59.334608    2748 build_images.go:133] succeeded building to: functional-398000
I0729 10:06:59.334613    2748 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.56s)
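
For reference, the numbered build stages above (#5 FROM busybox, #6 RUN true, #7 ADD content.txt) imply a test Dockerfile of roughly the following shape. This is a reconstruction from the build log, not the verbatim contents of testdata/build:

	# Sketch reconstructed from the build log; the real testdata/build context may differ.
	cat > Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	# content.txt corresponds to the 62 B build context transferred in stage #4.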

TestFunctional/parallel/ImageCommands/Setup (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
2024/07/29 10:06:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.77213375s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-398000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

TestFunctional/parallel/DockerEnv/bash (0.31s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-398000 docker-env) && out/minikube-darwin-arm64 status -p functional-398000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-398000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.31s)
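
The docker-env round trip this test exercises can be summarized as follows (the commands are taken from the run above; the profile name functional-398000 is specific to this run):

	# Point the local docker CLI at the Docker daemon inside the minikube VM,
	# then verify the connection by listing that daemon's images.
	eval $(out/minikube-darwin-arm64 -p functional-398000 docker-env)
	docker images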

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image load --daemon docker.io/kicbase/echo-server:functional-398000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image load --daemon docker.io/kicbase/echo-server:functional-398000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.39s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-398000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image load --daemon docker.io/kicbase/echo-server:functional-398000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image save docker.io/kicbase/echo-server:functional-398000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image rm docker.io/kicbase/echo-server:functional-398000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-398000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image save --daemon docker.io/kicbase/echo-server:functional-398000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-398000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)
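
Taken together, the image tests above exercise a full save/remove/load round trip; in sketch (all commands appear in the runs above; the tar path and profile name are specific to this run):

	out/minikube-darwin-arm64 -p functional-398000 image save docker.io/kicbase/echo-server:functional-398000 /Users/jenkins/workspace/echo-server-save.tar
	out/minikube-darwin-arm64 -p functional-398000 image rm docker.io/kicbase/echo-server:functional-398000
	out/minikube-darwin-arm64 -p functional-398000 image load /Users/jenkins/workspace/echo-server-save.tar
	out/minikube-darwin-arm64 -p functional-398000 image ls   # the image is present again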

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-398000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-398000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-398000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (199.24s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-011000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0729 10:07:01.371653    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:09:17.502259    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:09:45.208941    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-011000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m19.053848959s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (199.24s)

TestMultiControlPlane/serial/DeployApp (5.41s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-011000 -- rollout status deployment/busybox: (3.875794375s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-9drhw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-fn78g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-r8l97 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-9drhw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-fn78g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-r8l97 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-9drhw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-fn78g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-r8l97 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.41s)

TestMultiControlPlane/serial/PingHostFromPods (0.76s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-9drhw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-9drhw -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-fn78g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-fn78g -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-r8l97 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-011000 -- exec busybox-fc5497c4f-r8l97 -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.76s)

TestMultiControlPlane/serial/AddWorkerNode (59.4s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-011000 -v=7 --alsologtostderr
E0729 10:11:03.631555    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:11:03.637534    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:11:03.648357    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:11:03.668783    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:11:03.709595    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:11:03.791596    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:11:03.953730    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:11:04.274287    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:11:04.916517    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:11:06.198346    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:11:08.760414    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:11:13.882134    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0729 10:11:24.123953    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-011000 -v=7 --alsologtostderr: (59.17775025s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.40s)

TestMultiControlPlane/serial/NodeLabels (0.17s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-011000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.17s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.33s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp testdata/cp-test.txt ha-011000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1144905994/001/cp-test_ha-011000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000:/home/docker/cp-test.txt ha-011000-m02:/home/docker/cp-test_ha-011000_ha-011000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m02 "sudo cat /home/docker/cp-test_ha-011000_ha-011000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000:/home/docker/cp-test.txt ha-011000-m03:/home/docker/cp-test_ha-011000_ha-011000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m03 "sudo cat /home/docker/cp-test_ha-011000_ha-011000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000:/home/docker/cp-test.txt ha-011000-m04:/home/docker/cp-test_ha-011000_ha-011000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m04 "sudo cat /home/docker/cp-test_ha-011000_ha-011000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp testdata/cp-test.txt ha-011000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1144905994/001/cp-test_ha-011000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m02:/home/docker/cp-test.txt ha-011000:/home/docker/cp-test_ha-011000-m02_ha-011000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000 "sudo cat /home/docker/cp-test_ha-011000-m02_ha-011000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m02:/home/docker/cp-test.txt ha-011000-m03:/home/docker/cp-test_ha-011000-m02_ha-011000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m03 "sudo cat /home/docker/cp-test_ha-011000-m02_ha-011000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m02:/home/docker/cp-test.txt ha-011000-m04:/home/docker/cp-test_ha-011000-m02_ha-011000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m04 "sudo cat /home/docker/cp-test_ha-011000-m02_ha-011000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp testdata/cp-test.txt ha-011000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1144905994/001/cp-test_ha-011000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m03:/home/docker/cp-test.txt ha-011000:/home/docker/cp-test_ha-011000-m03_ha-011000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000 "sudo cat /home/docker/cp-test_ha-011000-m03_ha-011000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m03:/home/docker/cp-test.txt ha-011000-m02:/home/docker/cp-test_ha-011000-m03_ha-011000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m02 "sudo cat /home/docker/cp-test_ha-011000-m03_ha-011000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m03:/home/docker/cp-test.txt ha-011000-m04:/home/docker/cp-test_ha-011000-m03_ha-011000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m04 "sudo cat /home/docker/cp-test_ha-011000-m03_ha-011000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp testdata/cp-test.txt ha-011000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1144905994/001/cp-test_ha-011000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m04:/home/docker/cp-test.txt ha-011000:/home/docker/cp-test_ha-011000-m04_ha-011000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000 "sudo cat /home/docker/cp-test_ha-011000-m04_ha-011000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m04:/home/docker/cp-test.txt ha-011000-m02:/home/docker/cp-test_ha-011000-m04_ha-011000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m02 "sudo cat /home/docker/cp-test_ha-011000-m04_ha-011000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m04:/home/docker/cp-test.txt ha-011000-m03:/home/docker/cp-test_ha-011000-m04_ha-011000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-011000 ssh -n ha-011000-m03 "sudo cat /home/docker/cp-test_ha-011000-m04_ha-011000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.33s)
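
The cp invocations above cover all three transfer directions supported by minikube cp; in sketch (node and profile names taken from this run, destination filename illustrative):

	# host -> node
	out/minikube-darwin-arm64 -p ha-011000 cp testdata/cp-test.txt ha-011000-m02:/home/docker/cp-test.txt
	# node -> host
	out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m02:/home/docker/cp-test.txt ./cp-test_ha-011000-m02.txt
	# node -> node
	out/minikube-darwin-arm64 -p ha-011000 cp ha-011000-m02:/home/docker/cp-test.txt ha-011000-m03:/home/docker/cp-test.txt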

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.31s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0729 10:20:40.449232    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
E0729 10:21:03.511877    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/functional-398000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.3121765s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.31s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.26s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-137000 --output=json --user=testUser
E0729 10:29:17.356232    1648 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19345-1151/.minikube/profiles/addons-378000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-137000 --output=json --user=testUser: (3.259349375s)
--- PASS: TestJSONOutput/stop/Command (3.26s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-054000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-054000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.818708ms)

-- stdout --
	{"specversion":"1.0","id":"f23f8e4b-4458-452e-a8e7-8e6eae3047c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-054000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad9595aa-6b72-4269-a55e-9e3f44eeab04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19345"}}
	{"specversion":"1.0","id":"de7ef04e-0500-49eb-9f5d-bf915de43bb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig"}}
	{"specversion":"1.0","id":"445e8a47-050e-408b-bfcc-14efecd5f38d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"594c2464-c23e-4c39-b4a9-cc07eb37b17f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"30e00b20-432b-45aa-a0ff-be0e08ea3fdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube"}}
	{"specversion":"1.0","id":"1c62cfd2-3f16-431c-b56d-fb8499246c0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b9c0d0bf-80b8-4896-af39-23b5c3b65fd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-054000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-054000
--- PASS: TestErrorJSONOutput (0.19s)
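
Each stdout line above is a CloudEvents envelope, so the stream is machine-parseable. As a sketch, the human-readable messages could be pulled out of such a stream with jq (jq itself is an assumption here, not part of the test):

	out/minikube-darwin-arm64 start -p json-output-error-054000 --memory=2200 \
	  --output=json --wait=true --driver=fail | jq -r '.data.message // empty'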

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.94s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.94s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-615000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-615000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.737458ms)

-- stdout --
	* [NoKubernetes-615000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19345-1151/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19345-1151/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
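
The MK_USAGE failure is the expected result here: --no-kubernetes and --kubernetes-version are mutually exclusive. Following the error's own advice, a sketch of a working invocation would be:

	minikube config unset kubernetes-version
	out/minikube-darwin-arm64 start -p NoKubernetes-615000 --no-kubernetes --driver=qemu2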

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-615000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-615000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.484875ms)

-- stdout --
	* The control-plane node NoKubernetes-615000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-615000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.31s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.562198084s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.747738125s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.31s)

TestNoKubernetes/serial/Stop (3.36s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-615000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-615000: (3.359944709s)
--- PASS: TestNoKubernetes/serial/Stop (3.36s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-615000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-615000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.050458ms)

-- stdout --
	* The control-plane node NoKubernetes-615000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-615000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-396000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

TestStartStop/group/old-k8s-version/serial/Stop (3.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-670000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-670000 --alsologtostderr -v=3: (3.634531209s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.63s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000 -n old-k8s-version-670000: exit status 7 (57.657166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-670000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
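
The "may be ok" note reflects how this family of checks works: status --format={{.Host}} renders only the host field of the status through a Go template, and a nonzero exit with output "Stopped" is the expected state right after the preceding Stop step, so the test tolerates it before enabling the addon; in sketch (command taken from the run above):

	out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-670000
	# prints "Stopped" and exits 7 for a stopped profile, which the test treats as ok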

TestStartStop/group/no-preload/serial/Stop (2.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-143000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-143000 --alsologtostderr -v=3: (2.005753125s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (52.584625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-143000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.63s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-966000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-966000 --alsologtostderr -v=3: (3.624981958s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.63s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-371000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-371000 --alsologtostderr -v=3: (3.440725625s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.44s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-966000 -n embed-certs-966000: exit status 7 (58.555333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-966000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-371000 -n default-k8s-diff-port-371000: exit status 7 (57.975083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-371000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-197000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
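
The flags exercised above show minikube's addon image/registry override syntax, which takes Component=value pairs. A minimal sketch using the exact flags from this run (fake.domain is assumed to be a deliberately unreachable registry, used only to verify that the override is applied rather than to pull anything):

    # Substitute the metrics-server addon's image and point it at a fake registry.
    out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-197000 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain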

TestStartStop/group/newest-cni/serial/Stop (3.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-197000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-197000 --alsologtostderr -v=3: (3.346398917s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.35s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000: exit status 7 (54.824083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-197000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/282)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-783000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-783000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-783000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-783000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-783000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-783000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-783000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-783000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-783000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-783000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-783000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /etc/hosts:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /etc/resolv.conf:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-783000

>>> host: crictl pods:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: crictl containers:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> k8s: describe netcat deployment:
error: context "cilium-783000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-783000" does not exist

>>> k8s: netcat logs:
error: context "cilium-783000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-783000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-783000" does not exist

>>> k8s: coredns logs:
error: context "cilium-783000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-783000" does not exist

>>> k8s: api server logs:
error: context "cilium-783000" does not exist

>>> host: /etc/cni:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: ip a s:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: ip r s:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: iptables-save:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: iptables table nat:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-783000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-783000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-783000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-783000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-783000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-783000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-783000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-783000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-783000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-783000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-783000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: kubelet daemon config:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> k8s: kubelet logs:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
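
The kubeconfig dump above is empty (clusters, contexts and users are all null, with no current-context), which is consistent with every "context was not found" and context "does not exist" line in this debug log: the cilium-783000 profile was never started. A minimal sketch of the kind of lookup that fails this way (kubectl is assumed to be on PATH):

    # With no matching context in the kubeconfig, kubectl fails before contacting any cluster.
    kubectl --context cilium-783000 get pods -A
    # -> error: context "cilium-783000" does not exist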

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-783000

>>> host: docker daemon status:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: docker daemon config:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: docker system info:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: cri-docker daemon status:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: cri-docker daemon config:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: cri-dockerd version:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: containerd daemon status:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: containerd daemon config:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: containerd config dump:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: crio daemon status:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: crio daemon config:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /etc/crio:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: crio config:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

----------------------- debugLogs end: cilium-783000 [took: 2.181650209s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-783000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-783000
--- SKIP: TestNetworkPlugins/group/cilium (2.28s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-395000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-395000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)
