Test Report: QEMU_macOS 19312

                    
c58167e77f3b0efe0c3c561ff8e0552b34c41906:2024-07-21:35447

Failed tests (97/282)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.79
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.99
55 TestCertOptions 10.1
56 TestCertExpiration 195.23
57 TestDockerFlags 10.17
58 TestForceSystemdFlag 10.05
59 TestForceSystemdEnv 10.74
104 TestFunctional/parallel/ServiceCmdConnect 35.31
176 TestMultiControlPlane/serial/StopSecondaryNode 214.12
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 102.77
178 TestMultiControlPlane/serial/RestartSecondaryNode 209.01
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.42
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.03
183 TestMultiControlPlane/serial/StopCluster 202.1
184 TestMultiControlPlane/serial/RestartCluster 5.25
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
186 TestMultiControlPlane/serial/AddSecondaryNode 0.07
190 TestImageBuild/serial/Setup 10.17
193 TestJSONOutput/start/Command 9.93
199 TestJSONOutput/pause/Command 0.08
205 TestJSONOutput/unpause/Command 0.05
222 TestMinikubeProfile 10.13
225 TestMountStart/serial/StartWithMountFirst 10.01
228 TestMultiNode/serial/FreshStart2Nodes 9.78
229 TestMultiNode/serial/DeployApp2Nodes 74.8
230 TestMultiNode/serial/PingHostFrom2Pods 0.09
231 TestMultiNode/serial/AddNode 0.07
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.07
234 TestMultiNode/serial/CopyFile 0.06
235 TestMultiNode/serial/StopNode 0.13
236 TestMultiNode/serial/StartAfterStop 49.53
237 TestMultiNode/serial/RestartKeepsNodes 8.19
238 TestMultiNode/serial/DeleteNode 0.1
239 TestMultiNode/serial/StopMultiNode 3.64
240 TestMultiNode/serial/RestartMultiNode 5.25
241 TestMultiNode/serial/ValidateNameConflict 20.13
245 TestPreload 9.98
247 TestScheduledStopUnix 9.98
248 TestSkaffold 12.29
251 TestRunningBinaryUpgrade 600.45
253 TestKubernetesUpgrade 18.04
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.71
267 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.31
269 TestStoppedBinaryUpgrade/Upgrade 580.86
271 TestPause/serial/Start 9.86
281 TestNoKubernetes/serial/StartWithK8s 9.92
282 TestNoKubernetes/serial/StartWithStopK8s 5.31
283 TestNoKubernetes/serial/Start 5.28
287 TestNoKubernetes/serial/StartNoArgs 5.32
289 TestNetworkPlugins/group/auto/Start 9.94
290 TestNetworkPlugins/group/calico/Start 9.8
291 TestNetworkPlugins/group/custom-flannel/Start 9.88
292 TestNetworkPlugins/group/false/Start 9.94
293 TestNetworkPlugins/group/kindnet/Start 9.83
294 TestNetworkPlugins/group/flannel/Start 9.73
295 TestNetworkPlugins/group/enable-default-cni/Start 10.03
296 TestNetworkPlugins/group/bridge/Start 9.82
297 TestNetworkPlugins/group/kubenet/Start 9.81
300 TestStartStop/group/old-k8s-version/serial/FirstStart 9.93
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
305 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/old-k8s-version/serial/Pause 0.1
311 TestStartStop/group/no-preload/serial/FirstStart 9.85
312 TestStartStop/group/no-preload/serial/DeployApp 0.09
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
316 TestStartStop/group/embed-certs/serial/FirstStart 9.85
318 TestStartStop/group/no-preload/serial/SecondStart 6.11
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
322 TestStartStop/group/no-preload/serial/Pause 0.1
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.3
325 TestStartStop/group/embed-certs/serial/DeployApp 0.1
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
329 TestStartStop/group/embed-certs/serial/SecondStart 5.38
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
335 TestStartStop/group/embed-certs/serial/Pause 0.11
338 TestStartStop/group/newest-cni/serial/FirstStart 9.82
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.92
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.07
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
349 TestStartStop/group/newest-cni/serial/SecondStart 5.25
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
353 TestStartStop/group/newest-cni/serial/Pause 0.1
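
Durations in the table are wall-clock seconds. To reproduce a single entry locally, the usual route is a Go test filter run from the minikube source tree; the harness flags this CI job passes (start args, binary path, and so on) are not shown in this report, so the command below is only a sketch with those flags omitted.

    # Sketch: re-run one failing integration test by name from the repo root.
    # CI-specific harness flags are omitted here and would need to be supplied.
    go test ./test/integration -run 'TestOffline$' -v -timeout 30m
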
TestDownloadOnly/v1.20.0/json-events (13.79s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-504000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-504000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.792670584s)

-- stdout --
	{"specversion":"1.0","id":"c33f7399-1d10-4e5e-9145-3b4674792f03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-504000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c2827a10-ad0e-4a8e-898a-54e4ef35414e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"c6afa7d7-b0f2-4275-b2a7-8c24d22d485e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig"}}
	{"specversion":"1.0","id":"e5bfe903-a8a7-4222-81b1-5932d30cf453","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1b1801c5-58f1-4ee6-a773-09096d83c9bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"88420b1d-568d-4171-9769-419a6700386c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube"}}
	{"specversion":"1.0","id":"67a52168-b423-443e-9f19-562dc6c765bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"17e6ee61-6c98-4d41-8fc1-3d3e203b7de5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f05e000a-d541-4051-9a0b-e606b99edb45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"34eb2121-d884-4600-a891-9302bfc84819","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c278905d-1c8d-4007-800d-6d49949e918f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-504000\" primary control-plane node in \"download-only-504000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"445716b9-e0c9-4406-90b3-b2b51e278438","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cfe5290c-81d2-433c-a7b2-43ee1793e1ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106cc5a60 0x106cc5a60 0x106cc5a60 0x106cc5a60 0x106cc5a60 0x106cc5a60 0x106cc5a60] Decompressors:map[bz2:0x14000702ba0 gz:0x14000702ba8 tar:0x14000702ac0 tar.bz2:0x14000702ad0 tar.gz:0x14000702b30 tar.xz:0x14000702b40 tar.zst:0x14000702b80 tbz2:0x14000702ad0 tgz:0x14
000702b30 txz:0x14000702b40 tzst:0x14000702b80 xz:0x14000702be0 zip:0x14000702c10 zst:0x14000702be8] Getters:map[file:0x140014185a0 http:0x140007e01e0 https:0x140007e0230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"4ed4e6f7-46d0-4659-9498-74a85a5b3e1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0721 16:23:49.499317    1915 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:23:49.499441    1915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:23:49.499444    1915 out.go:304] Setting ErrFile to fd 2...
	I0721 16:23:49.499447    1915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:23:49.499580    1915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	W0721 16:23:49.499659    1915 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19312-1409/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19312-1409/.minikube/config/config.json: no such file or directory
	I0721 16:23:49.500846    1915 out.go:298] Setting JSON to true
	I0721 16:23:49.518382    1915 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1392,"bootTime":1721602837,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 16:23:49.518449    1915 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 16:23:49.521786    1915 out.go:97] [download-only-504000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 16:23:49.521938    1915 notify.go:220] Checking for updates...
	W0721 16:23:49.521972    1915 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball: no such file or directory
	I0721 16:23:49.524742    1915 out.go:169] MINIKUBE_LOCATION=19312
	I0721 16:23:49.527758    1915 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 16:23:49.531688    1915 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 16:23:49.534744    1915 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 16:23:49.537774    1915 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	W0721 16:23:49.541757    1915 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0721 16:23:49.542032    1915 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 16:23:49.545780    1915 out.go:97] Using the qemu2 driver based on user configuration
	I0721 16:23:49.545797    1915 start.go:297] selected driver: qemu2
	I0721 16:23:49.545810    1915 start.go:901] validating driver "qemu2" against <nil>
	I0721 16:23:49.545872    1915 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 16:23:49.548742    1915 out.go:169] Automatically selected the socket_vmnet network
	I0721 16:23:49.554455    1915 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0721 16:23:49.554552    1915 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 16:23:49.554574    1915 cni.go:84] Creating CNI manager for ""
	I0721 16:23:49.554590    1915 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0721 16:23:49.554650    1915 start.go:340] cluster config:
	{Name:download-only-504000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 16:23:49.559722    1915 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 16:23:49.563784    1915 out.go:97] Downloading VM boot image ...
	I0721 16:23:49.563798    1915 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso
	I0721 16:23:54.963159    1915 out.go:97] Starting "download-only-504000" primary control-plane node in "download-only-504000" cluster
	I0721 16:23:54.963201    1915 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 16:23:55.019895    1915 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0721 16:23:55.019919    1915 cache.go:56] Caching tarball of preloaded images
	I0721 16:23:55.020062    1915 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 16:23:55.024201    1915 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0721 16:23:55.024208    1915 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0721 16:23:55.098900    1915 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0721 16:24:02.034198    1915 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0721 16:24:02.034379    1915 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0721 16:24:02.729850    1915 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0721 16:24:02.730047    1915 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/download-only-504000/config.json ...
	I0721 16:24:02.730078    1915 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/download-only-504000/config.json: {Name:mka7443ca39924a8a20a238c279262f6c536e549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 16:24:02.730310    1915 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 16:24:02.730501    1915 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0721 16:24:03.220164    1915 out.go:169] 
	W0721 16:24:03.224300    1915 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106cc5a60 0x106cc5a60 0x106cc5a60 0x106cc5a60 0x106cc5a60 0x106cc5a60 0x106cc5a60] Decompressors:map[bz2:0x14000702ba0 gz:0x14000702ba8 tar:0x14000702ac0 tar.bz2:0x14000702ad0 tar.gz:0x14000702b30 tar.xz:0x14000702b40 tar.zst:0x14000702b80 tbz2:0x14000702ad0 tgz:0x14000702b30 txz:0x14000702b40 tzst:0x14000702b80 xz:0x14000702be0 zip:0x14000702c10 zst:0x14000702be8] Getters:map[file:0x140014185a0 http:0x140007e01e0 https:0x140007e0230] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0721 16:24:03.224326    1915 out_reason.go:110] 
	W0721 16:24:03.232271    1915 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 16:24:03.236199    1915 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-504000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (13.79s)
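
The root cause is the 404 on the kubectl checksum file for v1.20.0 on darwin/arm64: minikube could not cache a kubectl binary that does not appear to be published for that platform and version, which also explains the TestDownloadOnly/v1.20.0/kubectl failure below. A minimal, hypothetical spot-check (not part of the test suite) that reproduces the response code:

    # Follow redirects from dl.k8s.io and print the final HTTP status for the
    # checksum file the downloader requested; a 404 here matches the error above.
    curl -sIL -o /dev/null -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256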

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.99s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-219000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-219000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.843264583s)

-- stdout --
	* [offline-docker-219000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-219000" primary control-plane node in "offline-docker-219000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-219000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:05:48.989523    5117 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:05:48.989654    5117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:05:48.989657    5117 out.go:304] Setting ErrFile to fd 2...
	I0721 17:05:48.989660    5117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:05:48.989802    5117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:05:48.990936    5117 out.go:298] Setting JSON to false
	I0721 17:05:49.008057    5117 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3911,"bootTime":1721602837,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:05:49.008153    5117 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:05:49.014144    5117 out.go:177] * [offline-docker-219000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:05:49.020115    5117 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:05:49.020136    5117 notify.go:220] Checking for updates...
	I0721 17:05:49.025073    5117 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:05:49.028101    5117 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:05:49.031100    5117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:05:49.034057    5117 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:05:49.037084    5117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:05:49.040409    5117 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:05:49.040472    5117 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:05:49.044097    5117 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:05:49.051125    5117 start.go:297] selected driver: qemu2
	I0721 17:05:49.051137    5117 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:05:49.051144    5117 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:05:49.052951    5117 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:05:49.056047    5117 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:05:49.059165    5117 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:05:49.059202    5117 cni.go:84] Creating CNI manager for ""
	I0721 17:05:49.059209    5117 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:05:49.059213    5117 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:05:49.059260    5117 start.go:340] cluster config:
	{Name:offline-docker-219000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-219000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:05:49.062991    5117 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:05:49.068058    5117 out.go:177] * Starting "offline-docker-219000" primary control-plane node in "offline-docker-219000" cluster
	I0721 17:05:49.072072    5117 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:05:49.072118    5117 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:05:49.072132    5117 cache.go:56] Caching tarball of preloaded images
	I0721 17:05:49.072206    5117 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:05:49.072213    5117 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:05:49.072278    5117 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/offline-docker-219000/config.json ...
	I0721 17:05:49.072290    5117 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/offline-docker-219000/config.json: {Name:mk5bed887575995e74ac3e5b1e41ae1aefaf6559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:05:49.072573    5117 start.go:360] acquireMachinesLock for offline-docker-219000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:05:49.072611    5117 start.go:364] duration metric: took 29.833µs to acquireMachinesLock for "offline-docker-219000"
	I0721 17:05:49.072622    5117 start.go:93] Provisioning new machine with config: &{Name:offline-docker-219000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-219000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:05:49.072656    5117 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:05:49.081095    5117 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0721 17:05:49.096963    5117 start.go:159] libmachine.API.Create for "offline-docker-219000" (driver="qemu2")
	I0721 17:05:49.097003    5117 client.go:168] LocalClient.Create starting
	I0721 17:05:49.097093    5117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:05:49.097130    5117 main.go:141] libmachine: Decoding PEM data...
	I0721 17:05:49.097139    5117 main.go:141] libmachine: Parsing certificate...
	I0721 17:05:49.097190    5117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:05:49.097213    5117 main.go:141] libmachine: Decoding PEM data...
	I0721 17:05:49.097219    5117 main.go:141] libmachine: Parsing certificate...
	I0721 17:05:49.097584    5117 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:05:49.238909    5117 main.go:141] libmachine: Creating SSH key...
	I0721 17:05:49.421484    5117 main.go:141] libmachine: Creating Disk image...
	I0721 17:05:49.421494    5117 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:05:49.421687    5117 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/disk.qcow2
	I0721 17:05:49.431770    5117 main.go:141] libmachine: STDOUT: 
	I0721 17:05:49.431800    5117 main.go:141] libmachine: STDERR: 
	I0721 17:05:49.431854    5117 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/disk.qcow2 +20000M
	I0721 17:05:49.440966    5117 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:05:49.440983    5117 main.go:141] libmachine: STDERR: 
	I0721 17:05:49.441007    5117 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/disk.qcow2
	I0721 17:05:49.441011    5117 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:05:49.441022    5117 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:05:49.441052    5117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:d6:db:5c:47:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/disk.qcow2
	I0721 17:05:49.442753    5117 main.go:141] libmachine: STDOUT: 
	I0721 17:05:49.442767    5117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:05:49.442788    5117 client.go:171] duration metric: took 345.790583ms to LocalClient.Create
	I0721 17:05:51.444823    5117 start.go:128] duration metric: took 2.372225583s to createHost
	I0721 17:05:51.444839    5117 start.go:83] releasing machines lock for "offline-docker-219000", held for 2.372289167s
	W0721 17:05:51.444855    5117 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:05:51.456111    5117 out.go:177] * Deleting "offline-docker-219000" in qemu2 ...
	W0721 17:05:51.465264    5117 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:05:51.465278    5117 start.go:729] Will try again in 5 seconds ...
	I0721 17:05:56.467320    5117 start.go:360] acquireMachinesLock for offline-docker-219000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:05:56.467849    5117 start.go:364] duration metric: took 363.25µs to acquireMachinesLock for "offline-docker-219000"
	I0721 17:05:56.467988    5117 start.go:93] Provisioning new machine with config: &{Name:offline-docker-219000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-219000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:05:56.468333    5117 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:05:56.476794    5117 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0721 17:05:56.527115    5117 start.go:159] libmachine.API.Create for "offline-docker-219000" (driver="qemu2")
	I0721 17:05:56.527176    5117 client.go:168] LocalClient.Create starting
	I0721 17:05:56.527288    5117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:05:56.527361    5117 main.go:141] libmachine: Decoding PEM data...
	I0721 17:05:56.527379    5117 main.go:141] libmachine: Parsing certificate...
	I0721 17:05:56.527440    5117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:05:56.527494    5117 main.go:141] libmachine: Decoding PEM data...
	I0721 17:05:56.527506    5117 main.go:141] libmachine: Parsing certificate...
	I0721 17:05:56.528133    5117 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:05:56.678989    5117 main.go:141] libmachine: Creating SSH key...
	I0721 17:05:56.741036    5117 main.go:141] libmachine: Creating Disk image...
	I0721 17:05:56.741045    5117 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:05:56.741217    5117 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/disk.qcow2
	I0721 17:05:56.750452    5117 main.go:141] libmachine: STDOUT: 
	I0721 17:05:56.750470    5117 main.go:141] libmachine: STDERR: 
	I0721 17:05:56.750516    5117 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/disk.qcow2 +20000M
	I0721 17:05:56.758292    5117 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:05:56.758306    5117 main.go:141] libmachine: STDERR: 
	I0721 17:05:56.758316    5117 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/disk.qcow2
	I0721 17:05:56.758320    5117 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:05:56.758329    5117 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:05:56.758352    5117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:a7:f2:de:ae:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/offline-docker-219000/disk.qcow2
	I0721 17:05:56.759938    5117 main.go:141] libmachine: STDOUT: 
	I0721 17:05:56.759954    5117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:05:56.759966    5117 client.go:171] duration metric: took 232.790833ms to LocalClient.Create
	I0721 17:05:58.762090    5117 start.go:128] duration metric: took 2.293784375s to createHost
	I0721 17:05:58.762157    5117 start.go:83] releasing machines lock for "offline-docker-219000", held for 2.294346125s
	W0721 17:05:58.762580    5117 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-219000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-219000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:05:58.774229    5117 out.go:177] 
	W0721 17:05:58.778165    5117 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:05:58.778187    5117 out.go:239] * 
	* 
	W0721 17:05:58.780980    5117 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:05:58.788205    5117 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-219000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-21 17:05:58.804056 -0700 PDT m=+2529.361123543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-219000 -n offline-docker-219000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-219000 -n offline-docker-219000: exit status 7 (68.155834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-219000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-219000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-219000
--- FAIL: TestOffline (9.99s)
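
The failure here is not specific to TestOffline: VM creation aborts because nothing is accepting connections on /var/run/socket_vmnet, and the same "Connection refused" error shows up in the TestCertOptions and TestCertExpiration output below. A short, hypothetical agent-side check, assuming socket_vmnet is installed the way the logs suggest (client under /opt/socket_vmnet, possibly managed by Homebrew):

    # Is the socket present and is the daemon running?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If the daemon is managed by Homebrew, restarting the service may clear the
    # refused connections (assumption; depends on how the agent is provisioned).
    sudo brew services restart socket_vmnet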

TestCertOptions (10.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-668000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-668000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.84262975s)

-- stdout --
	* [cert-options-668000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-668000" primary control-plane node in "cert-options-668000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-668000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-668000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-668000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-668000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-668000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (77.980167ms)

-- stdout --
	* The control-plane node cert-options-668000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-668000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-668000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-668000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-668000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-668000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (37.5905ms)

-- stdout --
	* The control-plane node cert-options-668000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-668000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-668000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-668000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-668000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-21 17:06:29.850394 -0700 PDT m=+2560.408322126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-668000 -n cert-options-668000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-668000 -n cert-options-668000: exit status 7 (29.07175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-668000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-668000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-668000
--- FAIL: TestCertOptions (10.10s)

TestCertExpiration (195.23s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-578000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-578000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.869835333s)

-- stdout --
	* [cert-expiration-578000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-578000" primary control-plane node in "cert-expiration-578000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-578000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-578000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-578000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-578000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-578000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.207829666s)

-- stdout --
	* [cert-expiration-578000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-578000" primary control-plane node in "cert-expiration-578000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-578000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-578000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-578000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-578000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-578000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-578000" primary control-plane node in "cert-expiration-578000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-578000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-578000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-578000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-21 17:09:29.864806 -0700 PDT m=+2740.427675209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-578000 -n cert-expiration-578000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-578000 -n cert-expiration-578000: exit status 7 (65.353625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-578000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-578000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-578000
--- FAIL: TestCertExpiration (195.23s)

TestDockerFlags (10.17s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-007000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
E0721 17:06:12.376181    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-007000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.942195958s)

-- stdout --
	* [docker-flags-007000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-007000" primary control-plane node in "docker-flags-007000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-007000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:06:09.712376    5309 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:06:09.712496    5309 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:06:09.712499    5309 out.go:304] Setting ErrFile to fd 2...
	I0721 17:06:09.712502    5309 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:06:09.712647    5309 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:06:09.713733    5309 out.go:298] Setting JSON to false
	I0721 17:06:09.730025    5309 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3932,"bootTime":1721602837,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:06:09.730110    5309 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:06:09.735264    5309 out.go:177] * [docker-flags-007000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:06:09.745075    5309 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:06:09.745111    5309 notify.go:220] Checking for updates...
	I0721 17:06:09.752990    5309 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:06:09.756061    5309 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:06:09.759011    5309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:06:09.762024    5309 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:06:09.765070    5309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:06:09.766726    5309 config.go:182] Loaded profile config "force-systemd-flag-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:06:09.766792    5309 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:06:09.766847    5309 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:06:09.770987    5309 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:06:09.775440    5309 start.go:297] selected driver: qemu2
	I0721 17:06:09.775448    5309 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:06:09.775456    5309 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:06:09.777913    5309 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:06:09.782202    5309 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:06:09.785130    5309 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0721 17:06:09.785160    5309 cni.go:84] Creating CNI manager for ""
	I0721 17:06:09.785167    5309 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:06:09.785170    5309 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:06:09.785205    5309 start.go:340] cluster config:
	{Name:docker-flags-007000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-007000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:06:09.789029    5309 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:06:09.797062    5309 out.go:177] * Starting "docker-flags-007000" primary control-plane node in "docker-flags-007000" cluster
	I0721 17:06:09.801011    5309 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:06:09.801035    5309 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:06:09.801045    5309 cache.go:56] Caching tarball of preloaded images
	I0721 17:06:09.801100    5309 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:06:09.801106    5309 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:06:09.801166    5309 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/docker-flags-007000/config.json ...
	I0721 17:06:09.801179    5309 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/docker-flags-007000/config.json: {Name:mk4fc3e268bec2baeefbd8419fd9fda9c866617d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:06:09.801400    5309 start.go:360] acquireMachinesLock for docker-flags-007000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:06:09.801438    5309 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "docker-flags-007000"
	I0721 17:06:09.801449    5309 start.go:93] Provisioning new machine with config: &{Name:docker-flags-007000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-007000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:06:09.801487    5309 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:06:09.808011    5309 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0721 17:06:09.825960    5309 start.go:159] libmachine.API.Create for "docker-flags-007000" (driver="qemu2")
	I0721 17:06:09.825988    5309 client.go:168] LocalClient.Create starting
	I0721 17:06:09.826058    5309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:06:09.826088    5309 main.go:141] libmachine: Decoding PEM data...
	I0721 17:06:09.826101    5309 main.go:141] libmachine: Parsing certificate...
	I0721 17:06:09.826143    5309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:06:09.826168    5309 main.go:141] libmachine: Decoding PEM data...
	I0721 17:06:09.826175    5309 main.go:141] libmachine: Parsing certificate...
	I0721 17:06:09.826536    5309 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:06:09.971894    5309 main.go:141] libmachine: Creating SSH key...
	I0721 17:06:10.048117    5309 main.go:141] libmachine: Creating Disk image...
	I0721 17:06:10.048123    5309 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:06:10.048313    5309 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/disk.qcow2
	I0721 17:06:10.057505    5309 main.go:141] libmachine: STDOUT: 
	I0721 17:06:10.057519    5309 main.go:141] libmachine: STDERR: 
	I0721 17:06:10.057562    5309 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/disk.qcow2 +20000M
	I0721 17:06:10.065354    5309 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:06:10.065368    5309 main.go:141] libmachine: STDERR: 
	I0721 17:06:10.065378    5309 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/disk.qcow2
	I0721 17:06:10.065383    5309 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:06:10.065393    5309 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:06:10.065425    5309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:94:7e:74:a8:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/disk.qcow2
	I0721 17:06:10.067052    5309 main.go:141] libmachine: STDOUT: 
	I0721 17:06:10.067075    5309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:06:10.067091    5309 client.go:171] duration metric: took 241.104542ms to LocalClient.Create
	I0721 17:06:12.069264    5309 start.go:128] duration metric: took 2.267805167s to createHost
	I0721 17:06:12.069338    5309 start.go:83] releasing machines lock for "docker-flags-007000", held for 2.26795425s
	W0721 17:06:12.069401    5309 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:06:12.075660    5309 out.go:177] * Deleting "docker-flags-007000" in qemu2 ...
	W0721 17:06:12.103511    5309 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:06:12.103542    5309 start.go:729] Will try again in 5 seconds ...
	I0721 17:06:17.105576    5309 start.go:360] acquireMachinesLock for docker-flags-007000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:06:17.136119    5309 start.go:364] duration metric: took 30.377542ms to acquireMachinesLock for "docker-flags-007000"
	I0721 17:06:17.136274    5309 start.go:93] Provisioning new machine with config: &{Name:docker-flags-007000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-007000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:06:17.136524    5309 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:06:17.151152    5309 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0721 17:06:17.200989    5309 start.go:159] libmachine.API.Create for "docker-flags-007000" (driver="qemu2")
	I0721 17:06:17.201029    5309 client.go:168] LocalClient.Create starting
	I0721 17:06:17.201161    5309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:06:17.201232    5309 main.go:141] libmachine: Decoding PEM data...
	I0721 17:06:17.201249    5309 main.go:141] libmachine: Parsing certificate...
	I0721 17:06:17.201342    5309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:06:17.201392    5309 main.go:141] libmachine: Decoding PEM data...
	I0721 17:06:17.201402    5309 main.go:141] libmachine: Parsing certificate...
	I0721 17:06:17.201877    5309 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:06:17.352097    5309 main.go:141] libmachine: Creating SSH key...
	I0721 17:06:17.557134    5309 main.go:141] libmachine: Creating Disk image...
	I0721 17:06:17.557150    5309 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:06:17.557347    5309 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/disk.qcow2
	I0721 17:06:17.566901    5309 main.go:141] libmachine: STDOUT: 
	I0721 17:06:17.566929    5309 main.go:141] libmachine: STDERR: 
	I0721 17:06:17.566984    5309 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/disk.qcow2 +20000M
	I0721 17:06:17.574938    5309 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:06:17.574952    5309 main.go:141] libmachine: STDERR: 
	I0721 17:06:17.574961    5309 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/disk.qcow2
	I0721 17:06:17.574977    5309 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:06:17.574986    5309 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:06:17.575013    5309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:e3:82:fe:32:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/docker-flags-007000/disk.qcow2
	I0721 17:06:17.576634    5309 main.go:141] libmachine: STDOUT: 
	I0721 17:06:17.576648    5309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:06:17.576664    5309 client.go:171] duration metric: took 375.640334ms to LocalClient.Create
	I0721 17:06:19.578830    5309 start.go:128] duration metric: took 2.442335042s to createHost
	I0721 17:06:19.578894    5309 start.go:83] releasing machines lock for "docker-flags-007000", held for 2.442816458s
	W0721 17:06:19.579233    5309 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-007000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-007000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:06:19.594925    5309 out.go:177] 
	W0721 17:06:19.602103    5309 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:06:19.602244    5309 out.go:239] * 
	* 
	W0721 17:06:19.605006    5309 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:06:19.615028    5309 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-007000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (80.529459ms)

-- stdout --
	* The control-plane node docker-flags-007000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-007000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-007000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-007000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-007000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-007000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-007000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.558458ms)

-- stdout --
	* The control-plane node docker-flags-007000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-007000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-007000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-007000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-007000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-007000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-21 17:06:19.754672 -0700 PDT m=+2550.312320459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-007000 -n docker-flags-007000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-007000 -n docker-flags-007000: exit status 7 (28.428ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-007000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-007000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-007000
--- FAIL: TestDockerFlags (10.17s)

TestForceSystemdFlag (10.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-208000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-208000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.859037042s)

-- stdout --
	* [force-systemd-flag-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-208000" primary control-plane node in "force-systemd-flag-208000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-208000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:06:04.769308    5286 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:06:04.769433    5286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:06:04.769436    5286 out.go:304] Setting ErrFile to fd 2...
	I0721 17:06:04.769439    5286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:06:04.769568    5286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:06:04.770590    5286 out.go:298] Setting JSON to false
	I0721 17:06:04.786659    5286 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3927,"bootTime":1721602837,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:06:04.786726    5286 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:06:04.792527    5286 out.go:177] * [force-systemd-flag-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:06:04.799537    5286 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:06:04.799584    5286 notify.go:220] Checking for updates...
	I0721 17:06:04.806448    5286 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:06:04.809482    5286 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:06:04.812472    5286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:06:04.813749    5286 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:06:04.816461    5286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:06:04.819828    5286 config.go:182] Loaded profile config "force-systemd-env-181000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:06:04.819900    5286 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:06:04.819950    5286 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:06:04.824303    5286 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:06:04.831522    5286 start.go:297] selected driver: qemu2
	I0721 17:06:04.831530    5286 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:06:04.831539    5286 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:06:04.833807    5286 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:06:04.836472    5286 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:06:04.839523    5286 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 17:06:04.839536    5286 cni.go:84] Creating CNI manager for ""
	I0721 17:06:04.839542    5286 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:06:04.839547    5286 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:06:04.839575    5286 start.go:340] cluster config:
	{Name:force-systemd-flag-208000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:06:04.843267    5286 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:06:04.849461    5286 out.go:177] * Starting "force-systemd-flag-208000" primary control-plane node in "force-systemd-flag-208000" cluster
	I0721 17:06:04.853488    5286 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:06:04.853509    5286 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:06:04.853521    5286 cache.go:56] Caching tarball of preloaded images
	I0721 17:06:04.853578    5286 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:06:04.853583    5286 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:06:04.853649    5286 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/force-systemd-flag-208000/config.json ...
	I0721 17:06:04.853662    5286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/force-systemd-flag-208000/config.json: {Name:mk9781af227e2c2dd5d4210fe79ad474acd9f3ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:06:04.853995    5286 start.go:360] acquireMachinesLock for force-systemd-flag-208000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:06:04.854032    5286 start.go:364] duration metric: took 27µs to acquireMachinesLock for "force-systemd-flag-208000"
	I0721 17:06:04.854042    5286 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:06:04.854070    5286 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:06:04.861482    5286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0721 17:06:04.878806    5286 start.go:159] libmachine.API.Create for "force-systemd-flag-208000" (driver="qemu2")
	I0721 17:06:04.878838    5286 client.go:168] LocalClient.Create starting
	I0721 17:06:04.878905    5286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:06:04.878933    5286 main.go:141] libmachine: Decoding PEM data...
	I0721 17:06:04.878945    5286 main.go:141] libmachine: Parsing certificate...
	I0721 17:06:04.878980    5286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:06:04.879003    5286 main.go:141] libmachine: Decoding PEM data...
	I0721 17:06:04.879010    5286 main.go:141] libmachine: Parsing certificate...
	I0721 17:06:04.879388    5286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:06:05.020613    5286 main.go:141] libmachine: Creating SSH key...
	I0721 17:06:05.108707    5286 main.go:141] libmachine: Creating Disk image...
	I0721 17:06:05.108713    5286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:06:05.108881    5286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/disk.qcow2
	I0721 17:06:05.118184    5286 main.go:141] libmachine: STDOUT: 
	I0721 17:06:05.118202    5286 main.go:141] libmachine: STDERR: 
	I0721 17:06:05.118259    5286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/disk.qcow2 +20000M
	I0721 17:06:05.126135    5286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:06:05.126148    5286 main.go:141] libmachine: STDERR: 
	I0721 17:06:05.126161    5286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/disk.qcow2
	I0721 17:06:05.126186    5286 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:06:05.126199    5286 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:06:05.126228    5286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:2a:f3:2d:01:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/disk.qcow2
	I0721 17:06:05.127869    5286 main.go:141] libmachine: STDOUT: 
	I0721 17:06:05.127882    5286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:06:05.127898    5286 client.go:171] duration metric: took 249.062584ms to LocalClient.Create
	I0721 17:06:07.130043    5286 start.go:128] duration metric: took 2.276014417s to createHost
	I0721 17:06:07.130110    5286 start.go:83] releasing machines lock for "force-systemd-flag-208000", held for 2.276131125s
	W0721 17:06:07.130186    5286 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:06:07.153333    5286 out.go:177] * Deleting "force-systemd-flag-208000" in qemu2 ...
	W0721 17:06:07.172880    5286 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:06:07.172900    5286 start.go:729] Will try again in 5 seconds ...
	I0721 17:06:12.174900    5286 start.go:360] acquireMachinesLock for force-systemd-flag-208000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:06:12.175302    5286 start.go:364] duration metric: took 334.291µs to acquireMachinesLock for "force-systemd-flag-208000"
	I0721 17:06:12.175417    5286 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:06:12.175605    5286 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:06:12.182010    5286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0721 17:06:12.231193    5286 start.go:159] libmachine.API.Create for "force-systemd-flag-208000" (driver="qemu2")
	I0721 17:06:12.231248    5286 client.go:168] LocalClient.Create starting
	I0721 17:06:12.231384    5286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:06:12.231454    5286 main.go:141] libmachine: Decoding PEM data...
	I0721 17:06:12.231469    5286 main.go:141] libmachine: Parsing certificate...
	I0721 17:06:12.231527    5286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:06:12.231574    5286 main.go:141] libmachine: Decoding PEM data...
	I0721 17:06:12.231589    5286 main.go:141] libmachine: Parsing certificate...
	I0721 17:06:12.232450    5286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:06:12.392279    5286 main.go:141] libmachine: Creating SSH key...
	I0721 17:06:12.544162    5286 main.go:141] libmachine: Creating Disk image...
	I0721 17:06:12.544173    5286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:06:12.544374    5286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/disk.qcow2
	I0721 17:06:12.553667    5286 main.go:141] libmachine: STDOUT: 
	I0721 17:06:12.553691    5286 main.go:141] libmachine: STDERR: 
	I0721 17:06:12.553752    5286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/disk.qcow2 +20000M
	I0721 17:06:12.561763    5286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:06:12.561785    5286 main.go:141] libmachine: STDERR: 
	I0721 17:06:12.561803    5286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/disk.qcow2
	I0721 17:06:12.561808    5286 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:06:12.561813    5286 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:06:12.561856    5286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:f2:e2:1e:cc:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-flag-208000/disk.qcow2
	I0721 17:06:12.563458    5286 main.go:141] libmachine: STDOUT: 
	I0721 17:06:12.563474    5286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:06:12.563489    5286 client.go:171] duration metric: took 332.244208ms to LocalClient.Create
	I0721 17:06:14.565652    5286 start.go:128] duration metric: took 2.390071791s to createHost
	I0721 17:06:14.565732    5286 start.go:83] releasing machines lock for "force-systemd-flag-208000", held for 2.390470459s
	W0721 17:06:14.566294    5286 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-208000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-208000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:06:14.572960    5286 out.go:177] 
	W0721 17:06:14.577055    5286 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:06:14.577095    5286 out.go:239] * 
	* 
	W0721 17:06:14.579493    5286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:06:14.587055    5286 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-208000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-208000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-208000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.4205ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-flag-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-208000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-208000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-21 17:06:14.679608 -0700 PDT m=+2545.237115793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-208000 -n force-systemd-flag-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-208000 -n force-systemd-flag-208000: exit status 7 (33.416667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-208000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-208000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-208000
--- FAIL: TestForceSystemdFlag (10.05s)
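Both this test and TestForceSystemdEnv below fail at the same step, before the VM is ever booted: socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"). A minimal sketch, assuming only the socket path shown in the log, of a probe that could be run on the CI host to check whether the daemon is listening; the program is illustrative and not part of the test suite.

// Hypothetical stand-alone probe (not part of minikube or the tests):
// dials the Unix socket reported in the failures above and reports whether
// anything is accepting connections there.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the "Connection refused" errors

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}

If this dial also fails on the Jenkins agent, the socket_vmnet service on the host, rather than the change under test, is the likely cause of both failures.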

                                                
                                    
TestForceSystemdEnv (10.74s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-181000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-181000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.550378167s)

                                                
                                                
-- stdout --
	* [force-systemd-env-181000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-181000" primary control-plane node in "force-systemd-env-181000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-181000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:05:58.973475    5252 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:05:58.973598    5252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:05:58.973601    5252 out.go:304] Setting ErrFile to fd 2...
	I0721 17:05:58.973604    5252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:05:58.973730    5252 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:05:58.974834    5252 out.go:298] Setting JSON to false
	I0721 17:05:58.991193    5252 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3921,"bootTime":1721602837,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:05:58.991268    5252 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:05:58.997373    5252 out.go:177] * [force-systemd-env-181000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:05:59.003373    5252 notify.go:220] Checking for updates...
	I0721 17:05:59.008319    5252 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:05:59.015288    5252 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:05:59.022267    5252 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:05:59.029279    5252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:05:59.037327    5252 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:05:59.045288    5252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0721 17:05:59.049492    5252 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:05:59.049548    5252 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:05:59.053286    5252 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:05:59.059285    5252 start.go:297] selected driver: qemu2
	I0721 17:05:59.059291    5252 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:05:59.059297    5252 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:05:59.061444    5252 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:05:59.065279    5252 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:05:59.069391    5252 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 17:05:59.069405    5252 cni.go:84] Creating CNI manager for ""
	I0721 17:05:59.069412    5252 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:05:59.069416    5252 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:05:59.069452    5252 start.go:340] cluster config:
	{Name:force-systemd-env-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:05:59.072972    5252 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:05:59.080297    5252 out.go:177] * Starting "force-systemd-env-181000" primary control-plane node in "force-systemd-env-181000" cluster
	I0721 17:05:59.084152    5252 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:05:59.084166    5252 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:05:59.084173    5252 cache.go:56] Caching tarball of preloaded images
	I0721 17:05:59.084222    5252 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:05:59.084228    5252 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:05:59.084285    5252 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/force-systemd-env-181000/config.json ...
	I0721 17:05:59.084297    5252 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/force-systemd-env-181000/config.json: {Name:mk48cc669a44020ff431d51e94a39b271d1ef91a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:05:59.084488    5252 start.go:360] acquireMachinesLock for force-systemd-env-181000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:05:59.084526    5252 start.go:364] duration metric: took 31.041µs to acquireMachinesLock for "force-systemd-env-181000"
	I0721 17:05:59.084537    5252 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:05:59.084559    5252 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:05:59.092328    5252 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0721 17:05:59.108776    5252 start.go:159] libmachine.API.Create for "force-systemd-env-181000" (driver="qemu2")
	I0721 17:05:59.108809    5252 client.go:168] LocalClient.Create starting
	I0721 17:05:59.108869    5252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:05:59.108897    5252 main.go:141] libmachine: Decoding PEM data...
	I0721 17:05:59.108908    5252 main.go:141] libmachine: Parsing certificate...
	I0721 17:05:59.108943    5252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:05:59.108966    5252 main.go:141] libmachine: Decoding PEM data...
	I0721 17:05:59.108978    5252 main.go:141] libmachine: Parsing certificate...
	I0721 17:05:59.109331    5252 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:05:59.252686    5252 main.go:141] libmachine: Creating SSH key...
	I0721 17:05:59.301413    5252 main.go:141] libmachine: Creating Disk image...
	I0721 17:05:59.301418    5252 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:05:59.301583    5252 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/disk.qcow2
	I0721 17:05:59.311392    5252 main.go:141] libmachine: STDOUT: 
	I0721 17:05:59.311409    5252 main.go:141] libmachine: STDERR: 
	I0721 17:05:59.311473    5252 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/disk.qcow2 +20000M
	I0721 17:05:59.319664    5252 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:05:59.319678    5252 main.go:141] libmachine: STDERR: 
	I0721 17:05:59.319693    5252 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/disk.qcow2
	I0721 17:05:59.319697    5252 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:05:59.319715    5252 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:05:59.319744    5252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:d5:25:71:a2:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/disk.qcow2
	I0721 17:05:59.321381    5252 main.go:141] libmachine: STDOUT: 
	I0721 17:05:59.321397    5252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:05:59.321417    5252 client.go:171] duration metric: took 212.611208ms to LocalClient.Create
	I0721 17:06:01.323458    5252 start.go:128] duration metric: took 2.238937041s to createHost
	I0721 17:06:01.323477    5252 start.go:83] releasing machines lock for "force-systemd-env-181000", held for 2.239007542s
	W0721 17:06:01.323494    5252 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:06:01.332117    5252 out.go:177] * Deleting "force-systemd-env-181000" in qemu2 ...
	W0721 17:06:01.340494    5252 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:06:01.340508    5252 start.go:729] Will try again in 5 seconds ...
	I0721 17:06:06.342632    5252 start.go:360] acquireMachinesLock for force-systemd-env-181000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:06:07.130274    5252 start.go:364] duration metric: took 787.515041ms to acquireMachinesLock for "force-systemd-env-181000"
	I0721 17:06:07.130406    5252 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-181000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-181000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:06:07.130706    5252 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:06:07.144286    5252 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0721 17:06:07.193745    5252 start.go:159] libmachine.API.Create for "force-systemd-env-181000" (driver="qemu2")
	I0721 17:06:07.193803    5252 client.go:168] LocalClient.Create starting
	I0721 17:06:07.193956    5252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:06:07.194015    5252 main.go:141] libmachine: Decoding PEM data...
	I0721 17:06:07.194034    5252 main.go:141] libmachine: Parsing certificate...
	I0721 17:06:07.194100    5252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:06:07.194146    5252 main.go:141] libmachine: Decoding PEM data...
	I0721 17:06:07.194160    5252 main.go:141] libmachine: Parsing certificate...
	I0721 17:06:07.194802    5252 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:06:07.346166    5252 main.go:141] libmachine: Creating SSH key...
	I0721 17:06:07.433229    5252 main.go:141] libmachine: Creating Disk image...
	I0721 17:06:07.433238    5252 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:06:07.433411    5252 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/disk.qcow2
	I0721 17:06:07.442762    5252 main.go:141] libmachine: STDOUT: 
	I0721 17:06:07.442783    5252 main.go:141] libmachine: STDERR: 
	I0721 17:06:07.442827    5252 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/disk.qcow2 +20000M
	I0721 17:06:07.450651    5252 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:06:07.450667    5252 main.go:141] libmachine: STDERR: 
	I0721 17:06:07.450680    5252 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/disk.qcow2
	I0721 17:06:07.450683    5252 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:06:07.450693    5252 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:06:07.450727    5252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:cf:ed:c0:0c:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/force-systemd-env-181000/disk.qcow2
	I0721 17:06:07.452440    5252 main.go:141] libmachine: STDOUT: 
	I0721 17:06:07.452455    5252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:06:07.452467    5252 client.go:171] duration metric: took 258.665208ms to LocalClient.Create
	I0721 17:06:09.452847    5252 start.go:128] duration metric: took 2.322147083s to createHost
	I0721 17:06:09.452917    5252 start.go:83] releasing machines lock for "force-systemd-env-181000", held for 2.322660125s
	W0721 17:06:09.453316    5252 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-181000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:06:09.465783    5252 out.go:177] 
	W0721 17:06:09.470966    5252 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:06:09.471007    5252 out.go:239] * 
	* 
	W0721 17:06:09.473666    5252 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:06:09.483646    5252 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-181000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-181000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-181000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.7875ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-181000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-181000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-181000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-21 17:06:09.575357 -0700 PDT m=+2540.132723043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-181000 -n force-systemd-env-181000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-181000 -n force-systemd-env-181000: exit status 7 (35.307917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-181000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-181000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-181000
--- FAIL: TestForceSystemdEnv (10.74s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (35.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-044000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-044000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-vccg2" [915fc0bf-b6e9-44c0-b3ba-1a716df06a66] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-vccg2" [915fc0bf-b6e9-44c0-b3ba-1a716df06a66] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.004990584s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:31652
functional_test.go:1657: error fetching http://192.168.105.4:31652: Get "http://192.168.105.4:31652": dial tcp 192.168.105.4:31652: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31652: Get "http://192.168.105.4:31652": dial tcp 192.168.105.4:31652: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31652: Get "http://192.168.105.4:31652": dial tcp 192.168.105.4:31652: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31652: Get "http://192.168.105.4:31652": dial tcp 192.168.105.4:31652: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31652: Get "http://192.168.105.4:31652": dial tcp 192.168.105.4:31652: connect: connection refused
E0721 16:35:53.219893    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
functional_test.go:1657: error fetching http://192.168.105.4:31652: Get "http://192.168.105.4:31652": dial tcp 192.168.105.4:31652: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31652: Get "http://192.168.105.4:31652": dial tcp 192.168.105.4:31652: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31652: Get "http://192.168.105.4:31652": dial tcp 192.168.105.4:31652: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:31652: Get "http://192.168.105.4:31652": dial tcp 192.168.105.4:31652: connect: connection refused
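Every fetch of the NodePort endpoint above ends in connection refused. A rough, hypothetical stand-in for that polling, using the URL reported at functional_test.go:1651, in case the probe needs to be reproduced by hand against a live cluster:

// Hypothetical manual reproduction of the polling above; the URL is copied
// from the "found endpoint for hello-node-connect" line in this log.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "http://192.168.105.4:31652"

	for attempt := 1; attempt <= 8; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Printf("attempt %d: HTTP %d\n", attempt, resp.StatusCode)
		return
	}
	fmt.Println("endpoint never became reachable")
}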
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-044000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-vccg2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-044000/192.168.105.4
Start Time:       Sun, 21 Jul 2024 16:35:36 -0700
Labels:           app=hello-node-connect
pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
echoserver-arm:
Container ID:   docker://358e8146afb386e34b1a2ed15ec7ca23f00d05e0116fbaca50a074059060fc52
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Sun, 21 Jul 2024 16:35:51 -0700
Finished:     Sun, 21 Jul 2024 16:35:51 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jccmr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-jccmr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  34s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-vccg2 to functional-044000
Normal   Pulled     19s (x3 over 34s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    19s (x3 over 34s)  kubelet            Created container echoserver-arm
Normal   Started    19s (x3 over 34s)  kubelet            Started container echoserver-arm
Warning  BackOff    3s (x3 over 33s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-vccg2_default(915fc0bf-b6e9-44c0-b3ba-1a716df06a66)

                                                
                                                
functional_test.go:1604: (dbg) Run:  kubectl --context functional-044000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
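"exec format error" means the kernel refused to run /usr/sbin/nginx because the binary targets a different CPU architecture than the arm64 node, which is why the container crash-loops above. A small sketch, with a placeholder path, of how such a mismatch could be confirmed by reading the ELF header of a binary copied out of the registry.k8s.io/echoserver-arm:1.8 image (e.g. with "docker cp"):

// Illustrative check of a binary's ELF target architecture; the path is a
// placeholder for a file exported from the image, not a file in this repo.
package main

import (
	"debug/elf"
	"fmt"
	"log"
)

func main() {
	const path = "./nginx" // hypothetical: /usr/sbin/nginx extracted from the image

	f, err := elf.Open(path)
	if err != nil {
		log.Fatalf("not a readable ELF file: %v", err)
	}
	defer f.Close()

	// An arm64 node can only exec EM_AARCH64 binaries natively.
	fmt.Printf("machine: %v (runs natively on arm64: %v)\n", f.Machine, f.Machine == elf.EM_AARCH64)
}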
functional_test.go:1610: (dbg) Run:  kubectl --context functional-044000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.120.97
IPs:                      10.106.120.97
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31652/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-044000 -n functional-044000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:35 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2064888309/001:/mount-9p     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:35 PDT | 21 Jul 24 16:35 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh -- ls                                                                                         | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:35 PDT | 21 Jul 24 16:35 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh cat                                                                                           | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:35 PDT | 21 Jul 24 16:35 PDT |
	|           | /mount-9p/test-1721604958591528000                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh stat                                                                                          | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT | 21 Jul 24 16:36 PDT |
	|           | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh stat                                                                                          | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT | 21 Jul 24 16:36 PDT |
	|           | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh sudo                                                                                          | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT | 21 Jul 24 16:36 PDT |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port672247181/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT | 21 Jul 24 16:36 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh -- ls                                                                                         | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT | 21 Jul 24 16:36 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh sudo                                                                                          | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup337015648/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup337015648/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup337015648/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT | 21 Jul 24 16:36 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT | 21 Jul 24 16:36 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT | 21 Jul 24 16:36 PDT |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT |                     |
	|           | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| start     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-044000 --dry-run                                                                                      | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-044000 | jenkins | v1.33.1 | 21 Jul 24 16:36 PDT |                     |
	|           | -p functional-044000                                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 16:36:10
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 16:36:10.156331    2989 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:36:10.156435    2989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:36:10.156438    2989 out.go:304] Setting ErrFile to fd 2...
	I0721 16:36:10.156440    2989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:36:10.156575    2989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 16:36:10.157999    2989 out.go:298] Setting JSON to false
	I0721 16:36:10.174830    2989 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2133,"bootTime":1721602837,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 16:36:10.174918    2989 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 16:36:10.178120    2989 out.go:177] * [functional-044000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 16:36:10.185066    2989 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 16:36:10.185101    2989 notify.go:220] Checking for updates...
	I0721 16:36:10.192108    2989 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 16:36:10.195061    2989 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 16:36:10.198040    2989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 16:36:10.200947    2989 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 16:36:10.204043    2989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 16:36:10.207338    2989 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 16:36:10.207586    2989 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 16:36:10.212002    2989 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 16:36:10.219006    2989 start.go:297] selected driver: qemu2
	I0721 16:36:10.219013    2989 start.go:901] validating driver "qemu2" against &{Name:functional-044000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 16:36:10.219055    2989 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 16:36:10.225019    2989 out.go:177] 
	W0721 16:36:10.229017    2989 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0721 16:36:10.232922    2989 out.go:177] 
	
	
	==> Docker <==
	Jul 21 23:35:59 functional-044000 dockerd[5930]: time="2024-07-21T23:35:59.814159716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:35:59 functional-044000 dockerd[5930]: time="2024-07-21T23:35:59.814168091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:35:59 functional-044000 dockerd[5930]: time="2024-07-21T23:35:59.814199341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:35:59 functional-044000 cri-dockerd[6247]: time="2024-07-21T23:35:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/57b9f99a16fc03e9acb9464fb1cf60d7517c709c49ff4ccb67b16d51f671af0c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 21 23:36:05 functional-044000 cri-dockerd[6247]: time="2024-07-21T23:36:05Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Jul 21 23:36:05 functional-044000 dockerd[5930]: time="2024-07-21T23:36:05.555320085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:36:05 functional-044000 dockerd[5930]: time="2024-07-21T23:36:05.555521000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:36:05 functional-044000 dockerd[5930]: time="2024-07-21T23:36:05.555527083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:36:05 functional-044000 dockerd[5930]: time="2024-07-21T23:36:05.555553000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:36:05 functional-044000 dockerd[5923]: time="2024-07-21T23:36:05.608010584Z" level=info msg="ignoring event" container=6d40f2b0eefc839af0ea4a51e4dc00950146609ebd0a372db5a18542c69317b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:36:05 functional-044000 dockerd[5930]: time="2024-07-21T23:36:05.608235624Z" level=info msg="shim disconnected" id=6d40f2b0eefc839af0ea4a51e4dc00950146609ebd0a372db5a18542c69317b8 namespace=moby
	Jul 21 23:36:05 functional-044000 dockerd[5930]: time="2024-07-21T23:36:05.608316456Z" level=warning msg="cleaning up after shim disconnected" id=6d40f2b0eefc839af0ea4a51e4dc00950146609ebd0a372db5a18542c69317b8 namespace=moby
	Jul 21 23:36:05 functional-044000 dockerd[5930]: time="2024-07-21T23:36:05.608326248Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:36:06 functional-044000 dockerd[5930]: time="2024-07-21T23:36:06.683368761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:36:06 functional-044000 dockerd[5930]: time="2024-07-21T23:36:06.683402885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:36:06 functional-044000 dockerd[5930]: time="2024-07-21T23:36:06.683408635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:36:06 functional-044000 dockerd[5930]: time="2024-07-21T23:36:06.683603509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:36:06 functional-044000 dockerd[5930]: time="2024-07-21T23:36:06.720368147Z" level=info msg="shim disconnected" id=b039fc0694b39b30e3cba0ce65d76e45ada7983f2aeef05a07e82cd4cd752880 namespace=moby
	Jul 21 23:36:06 functional-044000 dockerd[5930]: time="2024-07-21T23:36:06.720401938Z" level=warning msg="cleaning up after shim disconnected" id=b039fc0694b39b30e3cba0ce65d76e45ada7983f2aeef05a07e82cd4cd752880 namespace=moby
	Jul 21 23:36:06 functional-044000 dockerd[5930]: time="2024-07-21T23:36:06.720406355Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:36:06 functional-044000 dockerd[5923]: time="2024-07-21T23:36:06.720490563Z" level=info msg="ignoring event" container=b039fc0694b39b30e3cba0ce65d76e45ada7983f2aeef05a07e82cd4cd752880 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:36:07 functional-044000 dockerd[5923]: time="2024-07-21T23:36:07.195650098Z" level=info msg="ignoring event" container=57b9f99a16fc03e9acb9464fb1cf60d7517c709c49ff4ccb67b16d51f671af0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:36:07 functional-044000 dockerd[5930]: time="2024-07-21T23:36:07.195824054Z" level=info msg="shim disconnected" id=57b9f99a16fc03e9acb9464fb1cf60d7517c709c49ff4ccb67b16d51f671af0c namespace=moby
	Jul 21 23:36:07 functional-044000 dockerd[5930]: time="2024-07-21T23:36:07.195858929Z" level=warning msg="cleaning up after shim disconnected" id=57b9f99a16fc03e9acb9464fb1cf60d7517c709c49ff4ccb67b16d51f671af0c namespace=moby
	Jul 21 23:36:07 functional-044000 dockerd[5930]: time="2024-07-21T23:36:07.195863637Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b039fc0694b39       72565bf5bbedf                                                                                         5 seconds ago        Exited              echoserver-arm            3                   4c896f2f00a06       hello-node-65f5d5cc78-gnr4k
	6d40f2b0eefc8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 seconds ago        Exited              mount-munger              0                   57b9f99a16fc0       busybox-mount
	358e8146afb38       72565bf5bbedf                                                                                         20 seconds ago       Exited              echoserver-arm            2                   1db447a061870       hello-node-connect-6f49f58cd5-vccg2
	a96e10f515f69       nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df                         20 seconds ago       Running             myfrontend                0                   0e19c6b100750       sp-pod
	52387188376e9       nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         42 seconds ago       Running             nginx                     0                   af0a97c6505ac       nginx-svc
	db164f7f25ee4       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   5963b22ca4230       coredns-7db6d8ff4d-gfqff
	fd76c0aaccebf       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   9def82ae5880a       storage-provisioner
	e7cbf90adb7e3       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   50f63b7ec8b86       kube-proxy-vhqxv
	0083f0613d7ae       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   2d8de20b17715       etcd-functional-044000
	705f6f4e89327       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   75d1369b6bfbd       kube-controller-manager-functional-044000
	dfcd2e9f87e2f       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   8601b1aeed792       kube-scheduler-functional-044000
	f7315a561f28c       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   3db29398bb597       kube-apiserver-functional-044000
	a8edc3cf1cfa9       2437cf7621777                                                                                         2 minutes ago        Exited              coredns                   1                   035e757683db7       coredns-7db6d8ff4d-gfqff
	8b72b665292b8       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   b4268504404bd       storage-provisioner
	9a9d4ebeae6c8       2351f570ed0ea                                                                                         2 minutes ago        Exited              kube-proxy                1                   f6d18d0da6ef8       kube-proxy-vhqxv
	3b2b19dbfab25       8e97cdb19e7cc                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   f73c9ef8ad64c       kube-controller-manager-functional-044000
	1c05515596fcd       d48f992a22722                                                                                         2 minutes ago        Exited              kube-scheduler            1                   1342ed0df3642       kube-scheduler-functional-044000
	b9f4f0b0730e5       014faa467e297                                                                                         2 minutes ago        Exited              etcd                      1                   c546598180066       etcd-functional-044000
	
	
	==> coredns [a8edc3cf1cfa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51173 - 53505 "HINFO IN 599917650156730145.6861379376692621713. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.117777187s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [db164f7f25ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47056 - 26259 "HINFO IN 5830594097355095485.7764544244215891225. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022349542s
	[INFO] 10.244.0.1:16718 - 57452 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000085874s
	[INFO] 10.244.0.1:40122 - 39125 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000095167s
	[INFO] 10.244.0.1:9909 - 20640 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000899747s
	[INFO] 10.244.0.1:21369 - 13491 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000029375s
	[INFO] 10.244.0.1:18852 - 50196 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000065458s
	[INFO] 10.244.0.1:61157 - 35684 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000243207s
	
	
	==> describe nodes <==
	Name:               functional-044000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-044000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=functional-044000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_21T16_32_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:32:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-044000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:36:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:35:54 +0000   Sun, 21 Jul 2024 23:32:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:35:54 +0000   Sun, 21 Jul 2024 23:32:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:35:54 +0000   Sun, 21 Jul 2024 23:32:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:35:54 +0000   Sun, 21 Jul 2024 23:32:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-044000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8dff72f72ff48d2b4a25bd78110abf0
	  System UUID:                c8dff72f72ff48d2b4a25bd78110abf0
	  Boot ID:                    32ae0355-9598-486e-aac8-6aa4a2f761b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-gnr4k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  default                     hello-node-connect-6f49f58cd5-vccg2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 coredns-7db6d8ff4d-gfqff                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m8s
	  kube-system                 etcd-functional-044000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m22s
	  kube-system                 kube-apiserver-functional-044000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-functional-044000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 kube-proxy-vhqxv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kube-scheduler-functional-044000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-nmbgk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-zfp7s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m7s                   kube-proxy       
	  Normal  Starting                 76s                    kube-proxy       
	  Normal  Starting                 2m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m27s                  kubelet          Node functional-044000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m22s                  kubelet          Node functional-044000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m22s                  kubelet          Node functional-044000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s                  kubelet          Node functional-044000 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m22s                  kubelet          Starting kubelet.
	  Normal  NodeReady                3m19s                  kubelet          Node functional-044000 status is now: NodeReady
	  Normal  RegisteredNode           3m10s                  node-controller  Node functional-044000 event: Registered Node functional-044000 in Controller
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node functional-044000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node functional-044000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m11s (x7 over 2m11s)  kubelet          Node functional-044000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           115s                   node-controller  Node functional-044000 event: Registered Node functional-044000 in Controller
	  Normal  Starting                 82s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x8 over 82s)      kubelet          Node functional-044000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 82s)      kubelet          Node functional-044000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x7 over 82s)      kubelet          Node functional-044000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           66s                    node-controller  Node functional-044000 event: Registered Node functional-044000 in Controller
	
	
	==> dmesg <==
	[ +12.389479] kauditd_printk_skb: 33 callbacks suppressed
	[  +0.163457] systemd-fstab-generator[5003]: Ignoring "noauto" option for root device
	[ +19.019170] systemd-fstab-generator[5453]: Ignoring "noauto" option for root device
	[  +0.057815] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.097502] systemd-fstab-generator[5488]: Ignoring "noauto" option for root device
	[  +0.093311] systemd-fstab-generator[5500]: Ignoring "noauto" option for root device
	[  +0.093529] systemd-fstab-generator[5514]: Ignoring "noauto" option for root device
	[  +5.100448] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.393584] systemd-fstab-generator[6131]: Ignoring "noauto" option for root device
	[  +0.079147] systemd-fstab-generator[6143]: Ignoring "noauto" option for root device
	[  +0.071686] systemd-fstab-generator[6155]: Ignoring "noauto" option for root device
	[  +0.085153] systemd-fstab-generator[6212]: Ignoring "noauto" option for root device
	[  +0.200508] systemd-fstab-generator[6399]: Ignoring "noauto" option for root device
	[  +1.051393] systemd-fstab-generator[6525]: Ignoring "noauto" option for root device
	[  +1.230342] kauditd_printk_skb: 189 callbacks suppressed
	[Jul21 23:35] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.108782] systemd-fstab-generator[7537]: Ignoring "noauto" option for root device
	[  +4.561885] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.176435] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.493346] kauditd_printk_skb: 22 callbacks suppressed
	[  +9.777829] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.392056] kauditd_printk_skb: 38 callbacks suppressed
	[ +16.931137] kauditd_printk_skb: 21 callbacks suppressed
	[Jul21 23:36] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.453575] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [0083f0613d7a] <==
	{"level":"info","ts":"2024-07-21T23:34:51.020956Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-21T23:34:51.020978Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-21T23:34:51.021076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-21T23:34:51.021131Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-21T23:34:51.021187Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-21T23:34:51.021216Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-21T23:34:51.023684Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-21T23:34:51.023758Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-21T23:34:51.024735Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-21T23:34:51.025532Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-21T23:34:51.025565Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-21T23:34:52.490071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-21T23:34:52.490229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-21T23:34:52.490303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-21T23:34:52.490337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-21T23:34:52.490353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-21T23:34:52.490385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-21T23:34:52.490467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-21T23:34:52.495121Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-044000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-21T23:34:52.495216Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-21T23:34:52.495829Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-21T23:34:52.496172Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-21T23:34:52.496389Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-21T23:34:52.499783Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-21T23:34:52.499793Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [b9f4f0b0730e] <==
	{"level":"info","ts":"2024-07-21T23:34:01.000728Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-21T23:34:02.673541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-21T23:34:02.673691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-21T23:34:02.673757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-21T23:34:02.673794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-21T23:34:02.673817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-21T23:34:02.673841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-21T23:34:02.673872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-21T23:34:02.676685Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-044000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-21T23:34:02.676697Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-21T23:34:02.677263Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-21T23:34:02.677662Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-21T23:34:02.677768Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-21T23:34:02.681562Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-21T23:34:02.681562Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-21T23:34:35.654887Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-21T23:34:35.65492Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-044000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-21T23:34:35.654975Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-21T23:34:35.655022Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-21T23:34:35.667591Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-21T23:34:35.667615Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-21T23:34:35.667653Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-21T23:34:35.669749Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-21T23:34:35.669795Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-21T23:34:35.6698Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-044000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 23:36:11 up 3 min,  0 users,  load average: 1.21, 0.67, 0.27
	Linux functional-044000 5.10.207 #1 SMP PREEMPT Thu Jul 18 19:24:21 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f7315a561f28] <==
	I0721 23:34:53.156750       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0721 23:34:53.156761       1 aggregator.go:165] initial CRD sync complete...
	I0721 23:34:53.156764       1 autoregister_controller.go:141] Starting autoregister controller
	I0721 23:34:53.156767       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0721 23:34:53.156769       1 cache.go:39] Caches are synced for autoregister controller
	I0721 23:34:53.160976       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0721 23:34:53.161003       1 policy_source.go:224] refreshing policies
	I0721 23:34:53.161020       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0721 23:34:53.176913       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0721 23:34:54.030096       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0721 23:34:54.239147       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0721 23:34:54.242996       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0721 23:34:54.253473       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0721 23:34:54.261386       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0721 23:34:54.263354       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0721 23:35:05.380129       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0721 23:35:05.395754       1 controller.go:615] quota admission added evaluator for: endpoints
	I0721 23:35:14.052120       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.9.77"}
	I0721 23:35:19.185333       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0721 23:35:19.227508       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.162.241"}
	I0721 23:35:23.101153       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.207.63"}
	I0721 23:35:36.498272       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.120.97"}
	I0721 23:36:10.858466       1 controller.go:615] quota admission added evaluator for: namespaces
	I0721 23:36:11.015583       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.101.36"}
	I0721 23:36:11.033205       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.123.117"}
	
	
	==> kube-controller-manager [3b2b19dbfab2] <==
	I0721 23:34:16.057234       1 shared_informer.go:320] Caches are synced for daemon sets
	I0721 23:34:16.069427       1 shared_informer.go:320] Caches are synced for disruption
	I0721 23:34:16.069462       1 shared_informer.go:320] Caches are synced for job
	I0721 23:34:16.069436       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0721 23:34:16.069506       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0721 23:34:16.069464       1 shared_informer.go:320] Caches are synced for HPA
	I0721 23:34:16.070246       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0721 23:34:16.070273       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0721 23:34:16.070309       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0721 23:34:16.076832       1 shared_informer.go:320] Caches are synced for node
	I0721 23:34:16.076860       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0721 23:34:16.076875       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0721 23:34:16.076886       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0721 23:34:16.076892       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0721 23:34:16.082181       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0721 23:34:16.177721       1 shared_informer.go:320] Caches are synced for attach detach
	I0721 23:34:16.220151       1 shared_informer.go:320] Caches are synced for taint
	I0721 23:34:16.220198       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0721 23:34:16.220233       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-044000"
	I0721 23:34:16.220253       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0721 23:34:16.250643       1 shared_informer.go:320] Caches are synced for resource quota
	I0721 23:34:16.285760       1 shared_informer.go:320] Caches are synced for resource quota
	I0721 23:34:16.694593       1 shared_informer.go:320] Caches are synced for garbage collector
	I0721 23:34:16.781707       1 shared_informer.go:320] Caches are synced for garbage collector
	I0721 23:34:16.781718       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [705f6f4e8932] <==
	I0721 23:35:39.992149       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="24.083µs"
	I0721 23:35:51.666063       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="24.042µs"
	I0721 23:35:52.068726       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="22.25µs"
	I0721 23:35:54.665456       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="27.834µs"
	I0721 23:36:07.147069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="25.666µs"
	I0721 23:36:07.666830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="22.375µs"
	I0721 23:36:10.920642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="14.181328ms"
	E0721 23:36:10.920685       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0721 23:36:10.923320       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="19.248979ms"
	E0721 23:36:10.923342       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0721 23:36:10.929173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="8.471181ms"
	E0721 23:36:10.929195       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0721 23:36:10.931643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="2.435992ms"
	E0721 23:36:10.931658       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0721 23:36:10.931792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="8.438847ms"
	E0721 23:36:10.931801       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0721 23:36:10.942823       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="9.928301ms"
	E0721 23:36:10.942838       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0721 23:36:10.959643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="17.002444ms"
	I0721 23:36:10.964156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="19.955059ms"
	I0721 23:36:10.975203       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="15.538366ms"
	I0721 23:36:11.002591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="38.411623ms"
	I0721 23:36:11.002621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="12.458µs"
	I0721 23:36:11.012283       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="37.061961ms"
	I0721 23:36:11.012354       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="52.625µs"
	
	
	==> kube-proxy [9a9d4ebeae6c] <==
	I0721 23:34:03.852584       1 server_linux.go:69] "Using iptables proxy"
	I0721 23:34:03.858322       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0721 23:34:03.883178       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0721 23:34:03.883194       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0721 23:34:03.883203       1 server_linux.go:165] "Using iptables Proxier"
	I0721 23:34:03.883847       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0721 23:34:03.883937       1 server.go:872] "Version info" version="v1.30.3"
	I0721 23:34:03.883942       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:34:03.884513       1 config.go:192] "Starting service config controller"
	I0721 23:34:03.884517       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0721 23:34:03.884527       1 config.go:101] "Starting endpoint slice config controller"
	I0721 23:34:03.884529       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0721 23:34:03.884654       1 config.go:319] "Starting node config controller"
	I0721 23:34:03.884656       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0721 23:34:03.985100       1 shared_informer.go:320] Caches are synced for service config
	I0721 23:34:03.985100       1 shared_informer.go:320] Caches are synced for node config
	I0721 23:34:03.985114       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e7cbf90adb7e] <==
	I0721 23:34:54.191792       1 server_linux.go:69] "Using iptables proxy"
	I0721 23:34:54.197069       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0721 23:34:54.209013       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0721 23:34:54.209032       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0721 23:34:54.209041       1 server_linux.go:165] "Using iptables Proxier"
	I0721 23:34:54.211252       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0721 23:34:54.211369       1 server.go:872] "Version info" version="v1.30.3"
	I0721 23:34:54.211374       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:34:54.211753       1 config.go:192] "Starting service config controller"
	I0721 23:34:54.211763       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0721 23:34:54.211772       1 config.go:101] "Starting endpoint slice config controller"
	I0721 23:34:54.211781       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0721 23:34:54.212092       1 config.go:319] "Starting node config controller"
	I0721 23:34:54.212100       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0721 23:34:54.312760       1 shared_informer.go:320] Caches are synced for node config
	I0721 23:34:54.312770       1 shared_informer.go:320] Caches are synced for service config
	I0721 23:34:54.312781       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1c05515596fc] <==
	I0721 23:34:01.585343       1 serving.go:380] Generated self-signed cert in-memory
	W0721 23:34:03.218862       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0721 23:34:03.220914       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0721 23:34:03.220956       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0721 23:34:03.220973       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0721 23:34:03.252709       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0721 23:34:03.252844       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:34:03.253611       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0721 23:34:03.253648       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0721 23:34:03.253664       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0721 23:34:03.253657       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0721 23:34:03.354895       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0721 23:34:35.647135       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dfcd2e9f87e2] <==
	I0721 23:34:50.869859       1 serving.go:380] Generated self-signed cert in-memory
	W0721 23:34:53.066550       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0721 23:34:53.066620       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0721 23:34:53.066666       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0721 23:34:53.066692       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0721 23:34:53.085033       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0721 23:34:53.085051       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:34:53.085783       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0721 23:34:53.085869       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0721 23:34:53.085878       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0721 23:34:53.085885       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0721 23:34:53.186103       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 21 23:35:54 functional-044000 kubelet[6532]: E0721 23:35:54.661225    6532 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-gnr4k_default(8cc249af-62a6-47b2-818d-cd9d2aeb39ea)\"" pod="default/hello-node-65f5d5cc78-gnr4k" podUID="8cc249af-62a6-47b2-818d-cd9d2aeb39ea"
	Jul 21 23:35:59 functional-044000 kubelet[6532]: I0721 23:35:59.468113    6532 topology_manager.go:215] "Topology Admit Handler" podUID="f84d2293-2028-40ed-8c99-aeee4d18f14e" podNamespace="default" podName="busybox-mount"
	Jul 21 23:35:59 functional-044000 kubelet[6532]: I0721 23:35:59.585163    6532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/f84d2293-2028-40ed-8c99-aeee4d18f14e-test-volume\") pod \"busybox-mount\" (UID: \"f84d2293-2028-40ed-8c99-aeee4d18f14e\") " pod="default/busybox-mount"
	Jul 21 23:35:59 functional-044000 kubelet[6532]: I0721 23:35:59.585187    6532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9496d\" (UniqueName: \"kubernetes.io/projected/f84d2293-2028-40ed-8c99-aeee4d18f14e-kube-api-access-9496d\") pod \"busybox-mount\" (UID: \"f84d2293-2028-40ed-8c99-aeee4d18f14e\") " pod="default/busybox-mount"
	Jul 21 23:36:06 functional-044000 kubelet[6532]: I0721 23:36:06.660735    6532 scope.go:117] "RemoveContainer" containerID="69c7aadb238d5f5efa5f642f66e4d8ad59b5de986fcbebbbe65fa11b4bad9fbc"
	Jul 21 23:36:07 functional-044000 kubelet[6532]: I0721 23:36:07.142001    6532 scope.go:117] "RemoveContainer" containerID="69c7aadb238d5f5efa5f642f66e4d8ad59b5de986fcbebbbe65fa11b4bad9fbc"
	Jul 21 23:36:07 functional-044000 kubelet[6532]: I0721 23:36:07.142129    6532 scope.go:117] "RemoveContainer" containerID="b039fc0694b39b30e3cba0ce65d76e45ada7983f2aeef05a07e82cd4cd752880"
	Jul 21 23:36:07 functional-044000 kubelet[6532]: E0721 23:36:07.142211    6532 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-gnr4k_default(8cc249af-62a6-47b2-818d-cd9d2aeb39ea)\"" pod="default/hello-node-65f5d5cc78-gnr4k" podUID="8cc249af-62a6-47b2-818d-cd9d2aeb39ea"
	Jul 21 23:36:07 functional-044000 kubelet[6532]: I0721 23:36:07.336604    6532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/f84d2293-2028-40ed-8c99-aeee4d18f14e-test-volume\") pod \"f84d2293-2028-40ed-8c99-aeee4d18f14e\" (UID: \"f84d2293-2028-40ed-8c99-aeee4d18f14e\") "
	Jul 21 23:36:07 functional-044000 kubelet[6532]: I0721 23:36:07.336630    6532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9496d\" (UniqueName: \"kubernetes.io/projected/f84d2293-2028-40ed-8c99-aeee4d18f14e-kube-api-access-9496d\") pod \"f84d2293-2028-40ed-8c99-aeee4d18f14e\" (UID: \"f84d2293-2028-40ed-8c99-aeee4d18f14e\") "
	Jul 21 23:36:07 functional-044000 kubelet[6532]: I0721 23:36:07.336723    6532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f84d2293-2028-40ed-8c99-aeee4d18f14e-test-volume" (OuterVolumeSpecName: "test-volume") pod "f84d2293-2028-40ed-8c99-aeee4d18f14e" (UID: "f84d2293-2028-40ed-8c99-aeee4d18f14e"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 21 23:36:07 functional-044000 kubelet[6532]: I0721 23:36:07.337177    6532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84d2293-2028-40ed-8c99-aeee4d18f14e-kube-api-access-9496d" (OuterVolumeSpecName: "kube-api-access-9496d") pod "f84d2293-2028-40ed-8c99-aeee4d18f14e" (UID: "f84d2293-2028-40ed-8c99-aeee4d18f14e"). InnerVolumeSpecName "kube-api-access-9496d". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 21 23:36:07 functional-044000 kubelet[6532]: I0721 23:36:07.437168    6532 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/f84d2293-2028-40ed-8c99-aeee4d18f14e-test-volume\") on node \"functional-044000\" DevicePath \"\""
	Jul 21 23:36:07 functional-044000 kubelet[6532]: I0721 23:36:07.437180    6532 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9496d\" (UniqueName: \"kubernetes.io/projected/f84d2293-2028-40ed-8c99-aeee4d18f14e-kube-api-access-9496d\") on node \"functional-044000\" DevicePath \"\""
	Jul 21 23:36:07 functional-044000 kubelet[6532]: I0721 23:36:07.661129    6532 scope.go:117] "RemoveContainer" containerID="358e8146afb386e34b1a2ed15ec7ca23f00d05e0116fbaca50a074059060fc52"
	Jul 21 23:36:07 functional-044000 kubelet[6532]: E0721 23:36:07.661221    6532 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-vccg2_default(915fc0bf-b6e9-44c0-b3ba-1a716df06a66)\"" pod="default/hello-node-connect-6f49f58cd5-vccg2" podUID="915fc0bf-b6e9-44c0-b3ba-1a716df06a66"
	Jul 21 23:36:08 functional-044000 kubelet[6532]: I0721 23:36:08.148136    6532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57b9f99a16fc03e9acb9464fb1cf60d7517c709c49ff4ccb67b16d51f671af0c"
	Jul 21 23:36:10 functional-044000 kubelet[6532]: I0721 23:36:10.961468    6532 topology_manager.go:215] "Topology Admit Handler" podUID="47c9b7b4-16af-4278-9c57-7ba9b8c93556" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-zfp7s"
	Jul 21 23:36:10 functional-044000 kubelet[6532]: E0721 23:36:10.961507    6532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f84d2293-2028-40ed-8c99-aeee4d18f14e" containerName="mount-munger"
	Jul 21 23:36:10 functional-044000 kubelet[6532]: I0721 23:36:10.961523    6532 memory_manager.go:354] "RemoveStaleState removing state" podUID="f84d2293-2028-40ed-8c99-aeee4d18f14e" containerName="mount-munger"
	Jul 21 23:36:10 functional-044000 kubelet[6532]: I0721 23:36:10.962342    6532 topology_manager.go:215] "Topology Admit Handler" podUID="e8b04a49-dfe5-458b-a2b5-0509870d74eb" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-nmbgk"
	Jul 21 23:36:11 functional-044000 kubelet[6532]: I0721 23:36:11.062712    6532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9mnr\" (UniqueName: \"kubernetes.io/projected/47c9b7b4-16af-4278-9c57-7ba9b8c93556-kube-api-access-w9mnr\") pod \"kubernetes-dashboard-779776cb65-zfp7s\" (UID: \"47c9b7b4-16af-4278-9c57-7ba9b8c93556\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-zfp7s"
	Jul 21 23:36:11 functional-044000 kubelet[6532]: I0721 23:36:11.062746    6532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq5j6\" (UniqueName: \"kubernetes.io/projected/e8b04a49-dfe5-458b-a2b5-0509870d74eb-kube-api-access-rq5j6\") pod \"dashboard-metrics-scraper-b5fc48f67-nmbgk\" (UID: \"e8b04a49-dfe5-458b-a2b5-0509870d74eb\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-nmbgk"
	Jul 21 23:36:11 functional-044000 kubelet[6532]: I0721 23:36:11.062757    6532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e8b04a49-dfe5-458b-a2b5-0509870d74eb-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-nmbgk\" (UID: \"e8b04a49-dfe5-458b-a2b5-0509870d74eb\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-nmbgk"
	Jul 21 23:36:11 functional-044000 kubelet[6532]: I0721 23:36:11.062767    6532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/47c9b7b4-16af-4278-9c57-7ba9b8c93556-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-zfp7s\" (UID: \"47c9b7b4-16af-4278-9c57-7ba9b8c93556\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-zfp7s"
	
	
	==> storage-provisioner [8b72b665292b] <==
	I0721 23:34:03.811323       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0721 23:34:03.818141       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0721 23:34:03.818156       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0721 23:34:21.206005       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0721 23:34:21.206076       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-044000_eb99d5c2-0b85-4850-8e85-a19ac67c8fcc!
	I0721 23:34:21.206102       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e9d49dcb-8967-4fb0-977f-0fc581412eb8", APIVersion:"v1", ResourceVersion:"522", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-044000_eb99d5c2-0b85-4850-8e85-a19ac67c8fcc became leader
	I0721 23:34:21.307216       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-044000_eb99d5c2-0b85-4850-8e85-a19ac67c8fcc!
	
	
	==> storage-provisioner [fd76c0aacceb] <==
	I0721 23:34:54.178779       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0721 23:34:54.183553       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0721 23:34:54.183574       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0721 23:35:11.571404       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0721 23:35:11.571753       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-044000_4747ec6e-7a29-43ff-b897-37cbdffe2ee8!
	I0721 23:35:11.571700       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e9d49dcb-8967-4fb0-977f-0fc581412eb8", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-044000_4747ec6e-7a29-43ff-b897-37cbdffe2ee8 became leader
	I0721 23:35:11.672479       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-044000_4747ec6e-7a29-43ff-b897-37cbdffe2ee8!
	I0721 23:35:37.784585       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0721 23:35:37.785040       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"0be53915-83da-4bce-84f3-2da80da8c1a7", APIVersion:"v1", ResourceVersion:"758", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0721 23:35:37.784668       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    11249665-c0de-4789-a506-053b4788c32a 345 0 2024-07-21 23:33:03 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-21 23:33:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-0be53915-83da-4bce-84f3-2da80da8c1a7 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  0be53915-83da-4bce-84f3-2da80da8c1a7 758 0 2024-07-21 23:35:37 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-21 23:35:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-21 23:35:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0721 23:35:37.785639       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-0be53915-83da-4bce-84f3-2da80da8c1a7" provisioned
	I0721 23:35:37.785669       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0721 23:35:37.785682       1 volume_store.go:212] Trying to save persistentvolume "pvc-0be53915-83da-4bce-84f3-2da80da8c1a7"
	I0721 23:35:37.789029       1 volume_store.go:219] persistentvolume "pvc-0be53915-83da-4bce-84f3-2da80da8c1a7" saved
	I0721 23:35:37.789479       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"0be53915-83da-4bce-84f3-2da80da8c1a7", APIVersion:"v1", ResourceVersion:"758", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-0be53915-83da-4bce-84f3-2da80da8c1a7
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-044000 -n functional-044000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-044000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-b5fc48f67-nmbgk kubernetes-dashboard-779776cb65-zfp7s
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-044000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-nmbgk kubernetes-dashboard-779776cb65-zfp7s
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-044000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-nmbgk kubernetes-dashboard-779776cb65-zfp7s: exit status 1 (40.375667ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-044000/192.168.105.4
	Start Time:       Sun, 21 Jul 2024 16:35:59 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://6d40f2b0eefc839af0ea4a51e4dc00950146609ebd0a372db5a18542c69317b8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 21 Jul 2024 16:36:05 -0700
	      Finished:     Sun, 21 Jul 2024 16:36:05 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9496d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-9496d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  12s   default-scheduler  Successfully assigned default/busybox-mount to functional-044000
	  Normal  Pulling    12s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 5.65s (5.65s including waiting). Image size: 3547125 bytes.
	  Normal  Created    6s    kubelet            Created container mount-munger
	  Normal  Started    6s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-nmbgk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-zfp7s" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-044000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-nmbgk kubernetes-dashboard-779776cb65-zfp7s: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (35.31s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-736000 node stop m02 -v=7 --alsologtostderr: (12.190175s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr
E0721 16:45:19.040845    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:45:46.743352    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr: exit status 7 (2m55.968492875s)

                                                
                                                
-- stdout --
	ha-736000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-736000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-736000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-736000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 16:44:24.692553    3502 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:44:24.692894    3502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:44:24.692899    3502 out.go:304] Setting ErrFile to fd 2...
	I0721 16:44:24.692902    3502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:44:24.693032    3502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 16:44:24.693160    3502 out.go:298] Setting JSON to false
	I0721 16:44:24.693178    3502 mustload.go:65] Loading cluster: ha-736000
	I0721 16:44:24.693236    3502 notify.go:220] Checking for updates...
	I0721 16:44:24.693430    3502 config.go:182] Loaded profile config "ha-736000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 16:44:24.693439    3502 status.go:255] checking status of ha-736000 ...
	I0721 16:44:24.694232    3502 status.go:330] ha-736000 host status = "Running" (err=<nil>)
	I0721 16:44:24.694241    3502 host.go:66] Checking if "ha-736000" exists ...
	I0721 16:44:24.694339    3502 host.go:66] Checking if "ha-736000" exists ...
	I0721 16:44:24.694458    3502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 16:44:24.694466    3502 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/id_rsa Username:docker}
	W0721 16:44:50.617779    3502 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0721 16:44:50.617902    3502 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0721 16:44:50.617921    3502 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0721 16:44:50.617932    3502 status.go:257] ha-736000 status: &{Name:ha-736000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 16:44:50.617951    3502 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0721 16:44:50.617961    3502 status.go:255] checking status of ha-736000-m02 ...
	I0721 16:44:50.618356    3502 status.go:330] ha-736000-m02 host status = "Stopped" (err=<nil>)
	I0721 16:44:50.618366    3502 status.go:343] host is not running, skipping remaining checks
	I0721 16:44:50.618372    3502 status.go:257] ha-736000-m02 status: &{Name:ha-736000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 16:44:50.618390    3502 status.go:255] checking status of ha-736000-m03 ...
	I0721 16:44:50.619938    3502 status.go:330] ha-736000-m03 host status = "Running" (err=<nil>)
	I0721 16:44:50.619972    3502 host.go:66] Checking if "ha-736000-m03" exists ...
	I0721 16:44:50.620217    3502 host.go:66] Checking if "ha-736000-m03" exists ...
	I0721 16:44:50.620473    3502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 16:44:50.620486    3502 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m03/id_rsa Username:docker}
	W0721 16:46:05.621413    3502 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0721 16:46:05.621479    3502 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0721 16:46:05.621501    3502 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0721 16:46:05.621505    3502 status.go:257] ha-736000-m03 status: &{Name:ha-736000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 16:46:05.621518    3502 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0721 16:46:05.621526    3502 status.go:255] checking status of ha-736000-m04 ...
	I0721 16:46:05.622334    3502 status.go:330] ha-736000-m04 host status = "Running" (err=<nil>)
	I0721 16:46:05.622346    3502 host.go:66] Checking if "ha-736000-m04" exists ...
	I0721 16:46:05.622435    3502 host.go:66] Checking if "ha-736000-m04" exists ...
	I0721 16:46:05.622557    3502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 16:46:05.622566    3502 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m04/id_rsa Username:docker}
	W0721 16:47:20.622667    3502 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0721 16:47:20.622716    3502 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0721 16:47:20.622736    3502 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0721 16:47:20.622740    3502 status.go:257] ha-736000-m04 status: &{Name:ha-736000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0721 16:47:20.622749    3502 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr": ha-736000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-736000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-736000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-736000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr": ha-736000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-736000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-736000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-736000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr": ha-736000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-736000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-736000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-736000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000: exit status 3 (25.95860475s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 16:47:46.580962    3625 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0721 16:47:46.580973    3625 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-736000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0721 16:48:09.335247    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m16.810787542s)
ha_test.go:413: expected profile "ha-736000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-736000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-736000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-736000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docke
r\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000: exit status 3 (25.960940459s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 16:49:29.350184    3690 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0721 16:49:29.350200    3690 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-736000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (209.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 node start m02 -v=7 --alsologtostderr
E0721 16:49:32.401632    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-736000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.08108275s)

                                                
                                                
-- stdout --
	* Starting "ha-736000-m02" control-plane node in "ha-736000" cluster
	* Restarting existing qemu2 VM for "ha-736000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-736000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 16:49:29.383565    3708 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:49:29.383869    3708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:49:29.383876    3708 out.go:304] Setting ErrFile to fd 2...
	I0721 16:49:29.383878    3708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:49:29.383996    3708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 16:49:29.384243    3708 mustload.go:65] Loading cluster: ha-736000
	I0721 16:49:29.384457    3708 config.go:182] Loaded profile config "ha-736000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0721 16:49:29.384695    3708 host.go:58] "ha-736000-m02" host status: Stopped
	I0721 16:49:29.388868    3708 out.go:177] * Starting "ha-736000-m02" control-plane node in "ha-736000" cluster
	I0721 16:49:29.391697    3708 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 16:49:29.391712    3708 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 16:49:29.391718    3708 cache.go:56] Caching tarball of preloaded images
	I0721 16:49:29.391784    3708 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 16:49:29.391789    3708 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 16:49:29.391845    3708 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/ha-736000/config.json ...
	I0721 16:49:29.392547    3708 start.go:360] acquireMachinesLock for ha-736000-m02: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 16:49:29.392594    3708 start.go:364] duration metric: took 32.875µs to acquireMachinesLock for "ha-736000-m02"
	I0721 16:49:29.392603    3708 start.go:96] Skipping create...Using existing machine configuration
	I0721 16:49:29.392608    3708 fix.go:54] fixHost starting: m02
	I0721 16:49:29.392745    3708 fix.go:112] recreateIfNeeded on ha-736000-m02: state=Stopped err=<nil>
	W0721 16:49:29.392752    3708 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 16:49:29.395764    3708 out.go:177] * Restarting existing qemu2 VM for "ha-736000-m02" ...
	I0721 16:49:29.398796    3708 qemu.go:418] Using hvf for hardware acceleration
	I0721 16:49:29.398848    3708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:c4:91:34:ff:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/disk.qcow2
	I0721 16:49:29.401565    3708 main.go:141] libmachine: STDOUT: 
	I0721 16:49:29.401650    3708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 16:49:29.401674    3708 fix.go:56] duration metric: took 9.067667ms for fixHost
	I0721 16:49:29.401678    3708 start.go:83] releasing machines lock for "ha-736000-m02", held for 9.078583ms
	W0721 16:49:29.401684    3708 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 16:49:29.401718    3708 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 16:49:29.401723    3708 start.go:729] Will try again in 5 seconds ...
	I0721 16:49:34.403645    3708 start.go:360] acquireMachinesLock for ha-736000-m02: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 16:49:34.403791    3708 start.go:364] duration metric: took 124.5µs to acquireMachinesLock for "ha-736000-m02"
	I0721 16:49:34.403833    3708 start.go:96] Skipping create...Using existing machine configuration
	I0721 16:49:34.403837    3708 fix.go:54] fixHost starting: m02
	I0721 16:49:34.403990    3708 fix.go:112] recreateIfNeeded on ha-736000-m02: state=Stopped err=<nil>
	W0721 16:49:34.403995    3708 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 16:49:34.408722    3708 out.go:177] * Restarting existing qemu2 VM for "ha-736000-m02" ...
	I0721 16:49:34.411552    3708 qemu.go:418] Using hvf for hardware acceleration
	I0721 16:49:34.411587    3708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:c4:91:34:ff:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/disk.qcow2
	I0721 16:49:34.413613    3708 main.go:141] libmachine: STDOUT: 
	I0721 16:49:34.413673    3708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 16:49:34.413691    3708 fix.go:56] duration metric: took 9.854ms for fixHost
	I0721 16:49:34.413694    3708 start.go:83] releasing machines lock for "ha-736000-m02", held for 9.898042ms
	W0721 16:49:34.413747    3708 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-736000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-736000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 16:49:34.417683    3708 out.go:177] 
	W0721 16:49:34.421652    3708 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 16:49:34.421658    3708 out.go:239] * 
	* 
	W0721 16:49:34.423438    3708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 16:49:34.427674    3708 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0721 16:49:29.383565    3708 out.go:291] Setting OutFile to fd 1 ...
I0721 16:49:29.383869    3708 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:49:29.383876    3708 out.go:304] Setting ErrFile to fd 2...
I0721 16:49:29.383878    3708 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:49:29.383996    3708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
I0721 16:49:29.384243    3708 mustload.go:65] Loading cluster: ha-736000
I0721 16:49:29.384457    3708 config.go:182] Loaded profile config "ha-736000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0721 16:49:29.384695    3708 host.go:58] "ha-736000-m02" host status: Stopped
I0721 16:49:29.388868    3708 out.go:177] * Starting "ha-736000-m02" control-plane node in "ha-736000" cluster
I0721 16:49:29.391697    3708 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0721 16:49:29.391712    3708 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0721 16:49:29.391718    3708 cache.go:56] Caching tarball of preloaded images
I0721 16:49:29.391784    3708 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0721 16:49:29.391789    3708 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0721 16:49:29.391845    3708 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/ha-736000/config.json ...
I0721 16:49:29.392547    3708 start.go:360] acquireMachinesLock for ha-736000-m02: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0721 16:49:29.392594    3708 start.go:364] duration metric: took 32.875µs to acquireMachinesLock for "ha-736000-m02"
I0721 16:49:29.392603    3708 start.go:96] Skipping create...Using existing machine configuration
I0721 16:49:29.392608    3708 fix.go:54] fixHost starting: m02
I0721 16:49:29.392745    3708 fix.go:112] recreateIfNeeded on ha-736000-m02: state=Stopped err=<nil>
W0721 16:49:29.392752    3708 fix.go:138] unexpected machine state, will restart: <nil>
I0721 16:49:29.395764    3708 out.go:177] * Restarting existing qemu2 VM for "ha-736000-m02" ...
I0721 16:49:29.398796    3708 qemu.go:418] Using hvf for hardware acceleration
I0721 16:49:29.398848    3708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:c4:91:34:ff:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/disk.qcow2
I0721 16:49:29.401565    3708 main.go:141] libmachine: STDOUT: 
I0721 16:49:29.401650    3708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0721 16:49:29.401674    3708 fix.go:56] duration metric: took 9.067667ms for fixHost
I0721 16:49:29.401678    3708 start.go:83] releasing machines lock for "ha-736000-m02", held for 9.078583ms
W0721 16:49:29.401684    3708 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0721 16:49:29.401718    3708 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0721 16:49:29.401723    3708 start.go:729] Will try again in 5 seconds ...
I0721 16:49:34.403645    3708 start.go:360] acquireMachinesLock for ha-736000-m02: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0721 16:49:34.403791    3708 start.go:364] duration metric: took 124.5µs to acquireMachinesLock for "ha-736000-m02"
I0721 16:49:34.403833    3708 start.go:96] Skipping create...Using existing machine configuration
I0721 16:49:34.403837    3708 fix.go:54] fixHost starting: m02
I0721 16:49:34.403990    3708 fix.go:112] recreateIfNeeded on ha-736000-m02: state=Stopped err=<nil>
W0721 16:49:34.403995    3708 fix.go:138] unexpected machine state, will restart: <nil>
I0721 16:49:34.408722    3708 out.go:177] * Restarting existing qemu2 VM for "ha-736000-m02" ...
I0721 16:49:34.411552    3708 qemu.go:418] Using hvf for hardware acceleration
I0721 16:49:34.411587    3708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:c4:91:34:ff:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m02/disk.qcow2
I0721 16:49:34.413613    3708 main.go:141] libmachine: STDOUT: 
I0721 16:49:34.413673    3708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0721 16:49:34.413691    3708 fix.go:56] duration metric: took 9.854ms for fixHost
I0721 16:49:34.413694    3708 start.go:83] releasing machines lock for "ha-736000-m02", held for 9.898042ms
W0721 16:49:34.413747    3708 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-736000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-736000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0721 16:49:34.417683    3708 out.go:177] 
W0721 16:49:34.421652    3708 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0721 16:49:34.421658    3708 out.go:239] * 
* 
W0721 16:49:34.423438    3708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0721 16:49:34.427674    3708 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-736000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr
E0721 16:50:19.032516    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr: exit status 7 (2m57.938478s)

                                                
                                                
-- stdout --
	ha-736000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-736000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-736000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-736000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 16:49:34.463071    3715 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:49:34.463236    3715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:49:34.463240    3715 out.go:304] Setting ErrFile to fd 2...
	I0721 16:49:34.463242    3715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:49:34.463401    3715 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 16:49:34.463526    3715 out.go:298] Setting JSON to false
	I0721 16:49:34.463543    3715 mustload.go:65] Loading cluster: ha-736000
	I0721 16:49:34.463577    3715 notify.go:220] Checking for updates...
	I0721 16:49:34.463767    3715 config.go:182] Loaded profile config "ha-736000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 16:49:34.463774    3715 status.go:255] checking status of ha-736000 ...
	I0721 16:49:34.464522    3715 status.go:330] ha-736000 host status = "Running" (err=<nil>)
	I0721 16:49:34.464533    3715 host.go:66] Checking if "ha-736000" exists ...
	I0721 16:49:34.464639    3715 host.go:66] Checking if "ha-736000" exists ...
	I0721 16:49:34.464754    3715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 16:49:34.464763    3715 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/id_rsa Username:docker}
	W0721 16:49:34.464948    3715 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0721 16:49:34.464967    3715 retry.go:31] will retry after 225.872908ms: dial tcp 192.168.105.5:22: connect: host is down
	W0721 16:49:34.692989    3715 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0721 16:49:34.693009    3715 retry.go:31] will retry after 258.52241ms: dial tcp 192.168.105.5:22: connect: host is down
	W0721 16:49:34.953706    3715 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0721 16:49:34.953734    3715 retry.go:31] will retry after 423.383787ms: dial tcp 192.168.105.5:22: connect: host is down
	W0721 16:49:35.379283    3715 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0721 16:49:35.379313    3715 retry.go:31] will retry after 1.059940563s: dial tcp 192.168.105.5:22: connect: host is down
	W0721 16:50:02.361314    3715 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0721 16:50:02.361370    3715 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0721 16:50:02.361379    3715 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0721 16:50:02.361426    3715 status.go:257] ha-736000 status: &{Name:ha-736000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 16:50:02.361438    3715 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0721 16:50:02.361443    3715 status.go:255] checking status of ha-736000-m02 ...
	I0721 16:50:02.361698    3715 status.go:330] ha-736000-m02 host status = "Stopped" (err=<nil>)
	I0721 16:50:02.361703    3715 status.go:343] host is not running, skipping remaining checks
	I0721 16:50:02.361705    3715 status.go:257] ha-736000-m02 status: &{Name:ha-736000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 16:50:02.361709    3715 status.go:255] checking status of ha-736000-m03 ...
	I0721 16:50:02.362303    3715 status.go:330] ha-736000-m03 host status = "Running" (err=<nil>)
	I0721 16:50:02.362309    3715 host.go:66] Checking if "ha-736000-m03" exists ...
	I0721 16:50:02.362407    3715 host.go:66] Checking if "ha-736000-m03" exists ...
	I0721 16:50:02.362517    3715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 16:50:02.362524    3715 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m03/id_rsa Username:docker}
	W0721 16:51:17.361111    3715 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0721 16:51:17.361345    3715 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0721 16:51:17.361392    3715 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0721 16:51:17.361407    3715 status.go:257] ha-736000-m03 status: &{Name:ha-736000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 16:51:17.361441    3715 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0721 16:51:17.361456    3715 status.go:255] checking status of ha-736000-m04 ...
	I0721 16:51:17.364068    3715 status.go:330] ha-736000-m04 host status = "Running" (err=<nil>)
	I0721 16:51:17.364094    3715 host.go:66] Checking if "ha-736000-m04" exists ...
	I0721 16:51:17.364581    3715 host.go:66] Checking if "ha-736000-m04" exists ...
	I0721 16:51:17.364991    3715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 16:51:17.365013    3715 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000-m04/id_rsa Username:docker}
	W0721 16:52:32.363524    3715 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0721 16:52:32.363641    3715 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0721 16:52:32.363667    3715 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0721 16:52:32.363679    3715 status.go:257] ha-736000-m04 status: &{Name:ha-736000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0721 16:52:32.363704    3715 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000: exit status 3 (25.984116208s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 16:52:58.344822    3820 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0721 16:52:58.344864    3820 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-736000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (209.01s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-736000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-736000 -v=7 --alsologtostderr
E0721 16:55:19.024280    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:56:42.087745    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-736000 -v=7 --alsologtostderr: (3m49.025249875s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-736000 --wait=true -v=7 --alsologtostderr
E0721 16:58:09.319269    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-736000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.2279815s)

                                                
                                                
-- stdout --
	* [ha-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-736000" primary control-plane node in "ha-736000" cluster
	* Restarting existing qemu2 VM for "ha-736000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-736000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 16:58:05.486143    4073 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:58:05.486314    4073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:58:05.486319    4073 out.go:304] Setting ErrFile to fd 2...
	I0721 16:58:05.486323    4073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:58:05.486502    4073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 16:58:05.487865    4073 out.go:298] Setting JSON to false
	I0721 16:58:05.508418    4073 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3448,"bootTime":1721602837,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 16:58:05.508516    4073 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 16:58:05.512537    4073 out.go:177] * [ha-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 16:58:05.519374    4073 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 16:58:05.519412    4073 notify.go:220] Checking for updates...
	I0721 16:58:05.525279    4073 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 16:58:05.528282    4073 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 16:58:05.531313    4073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 16:58:05.534276    4073 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 16:58:05.537329    4073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 16:58:05.540550    4073 config.go:182] Loaded profile config "ha-736000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 16:58:05.540600    4073 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 16:58:05.545227    4073 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 16:58:05.551175    4073 start.go:297] selected driver: qemu2
	I0721 16:58:05.551182    4073 start.go:901] validating driver "qemu2" against &{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-736000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 16:58:05.551292    4073 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 16:58:05.553959    4073 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 16:58:05.553984    4073 cni.go:84] Creating CNI manager for ""
	I0721 16:58:05.553988    4073 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0721 16:58:05.554042    4073 start.go:340] cluster config:
	{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-736000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 16:58:05.558172    4073 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 16:58:05.566278    4073 out.go:177] * Starting "ha-736000" primary control-plane node in "ha-736000" cluster
	I0721 16:58:05.570267    4073 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 16:58:05.570284    4073 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 16:58:05.570298    4073 cache.go:56] Caching tarball of preloaded images
	I0721 16:58:05.570359    4073 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 16:58:05.570365    4073 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 16:58:05.570450    4073 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/ha-736000/config.json ...
	I0721 16:58:05.570868    4073 start.go:360] acquireMachinesLock for ha-736000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 16:58:05.570903    4073 start.go:364] duration metric: took 28.875µs to acquireMachinesLock for "ha-736000"
	I0721 16:58:05.570912    4073 start.go:96] Skipping create...Using existing machine configuration
	I0721 16:58:05.570918    4073 fix.go:54] fixHost starting: 
	I0721 16:58:05.571044    4073 fix.go:112] recreateIfNeeded on ha-736000: state=Stopped err=<nil>
	W0721 16:58:05.571052    4073 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 16:58:05.578903    4073 out.go:177] * Restarting existing qemu2 VM for "ha-736000" ...
	I0721 16:58:05.587305    4073 qemu.go:418] Using hvf for hardware acceleration
	I0721 16:58:05.587356    4073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:c8:81:83:de:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/disk.qcow2
	I0721 16:58:05.589490    4073 main.go:141] libmachine: STDOUT: 
	I0721 16:58:05.589511    4073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 16:58:05.589539    4073 fix.go:56] duration metric: took 18.621875ms for fixHost
	I0721 16:58:05.589543    4073 start.go:83] releasing machines lock for "ha-736000", held for 18.636167ms
	W0721 16:58:05.589550    4073 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 16:58:05.589585    4073 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 16:58:05.589590    4073 start.go:729] Will try again in 5 seconds ...
	I0721 16:58:10.591786    4073 start.go:360] acquireMachinesLock for ha-736000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 16:58:10.592300    4073 start.go:364] duration metric: took 359.208µs to acquireMachinesLock for "ha-736000"
	I0721 16:58:10.592434    4073 start.go:96] Skipping create...Using existing machine configuration
	I0721 16:58:10.592458    4073 fix.go:54] fixHost starting: 
	I0721 16:58:10.593168    4073 fix.go:112] recreateIfNeeded on ha-736000: state=Stopped err=<nil>
	W0721 16:58:10.593200    4073 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 16:58:10.597763    4073 out.go:177] * Restarting existing qemu2 VM for "ha-736000" ...
	I0721 16:58:10.605616    4073 qemu.go:418] Using hvf for hardware acceleration
	I0721 16:58:10.605807    4073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:c8:81:83:de:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/disk.qcow2
	I0721 16:58:10.615410    4073 main.go:141] libmachine: STDOUT: 
	I0721 16:58:10.615476    4073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 16:58:10.615572    4073 fix.go:56] duration metric: took 23.120792ms for fixHost
	I0721 16:58:10.615595    4073 start.go:83] releasing machines lock for "ha-736000", held for 23.265166ms
	W0721 16:58:10.615748    4073 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-736000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-736000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 16:58:10.623492    4073 out.go:177] 
	W0721 16:58:10.627686    4073 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 16:58:10.627782    4073 out.go:239] * 
	* 
	W0721 16:58:10.630572    4073 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 16:58:10.637706    4073 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-736000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-736000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000: exit status 7 (33.777708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-736000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.42s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-736000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.56025ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-736000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-736000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 16:58:10.776836    4088 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:58:10.777094    4088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:58:10.777097    4088 out.go:304] Setting ErrFile to fd 2...
	I0721 16:58:10.777099    4088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:58:10.777232    4088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 16:58:10.777470    4088 mustload.go:65] Loading cluster: ha-736000
	I0721 16:58:10.777674    4088 config.go:182] Loaded profile config "ha-736000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0721 16:58:10.777993    4088 out.go:239] ! The control-plane node ha-736000 host is not running (will try others): state=Stopped
	! The control-plane node ha-736000 host is not running (will try others): state=Stopped
	W0721 16:58:10.778101    4088 out.go:239] ! The control-plane node ha-736000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-736000-m02 host is not running (will try others): state=Stopped
	I0721 16:58:10.781512    4088 out.go:177] * The control-plane node ha-736000-m03 host is not running: state=Stopped
	I0721 16:58:10.784466    4088 out.go:177]   To start a cluster, run: "minikube start -p ha-736000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-736000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr: exit status 7 (29.345709ms)

                                                
                                                
-- stdout --
	ha-736000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-736000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-736000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-736000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 16:58:10.815731    4090 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:58:10.815868    4090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:58:10.815871    4090 out.go:304] Setting ErrFile to fd 2...
	I0721 16:58:10.815873    4090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:58:10.816035    4090 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 16:58:10.816148    4090 out.go:298] Setting JSON to false
	I0721 16:58:10.816157    4090 mustload.go:65] Loading cluster: ha-736000
	I0721 16:58:10.816230    4090 notify.go:220] Checking for updates...
	I0721 16:58:10.816393    4090 config.go:182] Loaded profile config "ha-736000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 16:58:10.816399    4090 status.go:255] checking status of ha-736000 ...
	I0721 16:58:10.816621    4090 status.go:330] ha-736000 host status = "Stopped" (err=<nil>)
	I0721 16:58:10.816625    4090 status.go:343] host is not running, skipping remaining checks
	I0721 16:58:10.816627    4090 status.go:257] ha-736000 status: &{Name:ha-736000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 16:58:10.816637    4090 status.go:255] checking status of ha-736000-m02 ...
	I0721 16:58:10.816723    4090 status.go:330] ha-736000-m02 host status = "Stopped" (err=<nil>)
	I0721 16:58:10.816726    4090 status.go:343] host is not running, skipping remaining checks
	I0721 16:58:10.816727    4090 status.go:257] ha-736000-m02 status: &{Name:ha-736000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 16:58:10.816731    4090 status.go:255] checking status of ha-736000-m03 ...
	I0721 16:58:10.816826    4090 status.go:330] ha-736000-m03 host status = "Stopped" (err=<nil>)
	I0721 16:58:10.816828    4090 status.go:343] host is not running, skipping remaining checks
	I0721 16:58:10.816830    4090 status.go:257] ha-736000-m03 status: &{Name:ha-736000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 16:58:10.816833    4090 status.go:255] checking status of ha-736000-m04 ...
	I0721 16:58:10.816934    4090 status.go:330] ha-736000-m04 host status = "Stopped" (err=<nil>)
	I0721 16:58:10.816940    4090 status.go:343] host is not running, skipping remaining checks
	I0721 16:58:10.816942    4090 status.go:257] ha-736000-m04 status: &{Name:ha-736000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000: exit status 7 (29.974583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-736000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-736000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-736000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-736000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-736000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000: exit status 7 (49.447417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-736000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.03s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (202.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 stop -v=7 --alsologtostderr
E0721 17:00:19.016335    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-736000 stop -v=7 --alsologtostderr: (3m21.994055s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr: exit status 7 (70.163833ms)

                                                
                                                
-- stdout --
	ha-736000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-736000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-736000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-736000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:01:33.933765    4513 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:01:33.933935    4513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:01:33.933940    4513 out.go:304] Setting ErrFile to fd 2...
	I0721 17:01:33.933943    4513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:01:33.934126    4513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:01:33.934286    4513 out.go:298] Setting JSON to false
	I0721 17:01:33.934300    4513 mustload.go:65] Loading cluster: ha-736000
	I0721 17:01:33.934342    4513 notify.go:220] Checking for updates...
	I0721 17:01:33.934611    4513 config.go:182] Loaded profile config "ha-736000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:01:33.934619    4513 status.go:255] checking status of ha-736000 ...
	I0721 17:01:33.934906    4513 status.go:330] ha-736000 host status = "Stopped" (err=<nil>)
	I0721 17:01:33.934911    4513 status.go:343] host is not running, skipping remaining checks
	I0721 17:01:33.934914    4513 status.go:257] ha-736000 status: &{Name:ha-736000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 17:01:33.934928    4513 status.go:255] checking status of ha-736000-m02 ...
	I0721 17:01:33.935064    4513 status.go:330] ha-736000-m02 host status = "Stopped" (err=<nil>)
	I0721 17:01:33.935069    4513 status.go:343] host is not running, skipping remaining checks
	I0721 17:01:33.935072    4513 status.go:257] ha-736000-m02 status: &{Name:ha-736000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 17:01:33.935078    4513 status.go:255] checking status of ha-736000-m03 ...
	I0721 17:01:33.935206    4513 status.go:330] ha-736000-m03 host status = "Stopped" (err=<nil>)
	I0721 17:01:33.935210    4513 status.go:343] host is not running, skipping remaining checks
	I0721 17:01:33.935213    4513 status.go:257] ha-736000-m03 status: &{Name:ha-736000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 17:01:33.935221    4513 status.go:255] checking status of ha-736000-m04 ...
	I0721 17:01:33.935342    4513 status.go:330] ha-736000-m04 host status = "Stopped" (err=<nil>)
	I0721 17:01:33.935346    4513 status.go:343] host is not running, skipping remaining checks
	I0721 17:01:33.935349    4513 status.go:257] ha-736000-m04 status: &{Name:ha-736000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr": ha-736000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-736000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-736000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-736000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr": ha-736000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-736000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-736000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-736000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr": ha-736000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-736000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-736000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-736000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000: exit status 7 (32.503958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-736000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.10s)
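Note: the three assertion messages above suggest ha_test validates "minikube status" by counting fixed substrings in the plain-text output rather than parsing it. A minimal sketch of that style of check in Go follows; the substrings are taken from the status output above, but the expected counts are illustrative only, not the test's actual thresholds.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status text in the same shape as the "minikube status" output captured above.
	status := `ha-736000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-736000-m04
type: Worker
host: Stopped
kubelet: Stopped
`
	// Illustrative thresholds only; the real test defines its own expected counts.
	if got := strings.Count(status, "type: Control Plane"); got != 1 {
		fmt.Printf("status says not one control-plane node is present (got %d)\n", got)
	}
	if got := strings.Count(status, "kubelet: Stopped"); got != 2 {
		fmt.Printf("status says not two kubelets are stopped (got %d)\n", got)
	}
}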

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-736000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-736000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.181007417s)

                                                
                                                
-- stdout --
	* [ha-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-736000" primary control-plane node in "ha-736000" cluster
	* Restarting existing qemu2 VM for "ha-736000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-736000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:01:33.996475    4517 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:01:33.996618    4517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:01:33.996622    4517 out.go:304] Setting ErrFile to fd 2...
	I0721 17:01:33.996624    4517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:01:33.996756    4517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:01:33.997911    4517 out.go:298] Setting JSON to false
	I0721 17:01:34.015005    4517 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3657,"bootTime":1721602837,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:01:34.015079    4517 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:01:34.019994    4517 out.go:177] * [ha-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:01:34.025900    4517 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:01:34.025945    4517 notify.go:220] Checking for updates...
	I0721 17:01:34.032886    4517 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:01:34.035928    4517 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:01:34.038945    4517 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:01:34.041796    4517 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:01:34.044851    4517 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:01:34.048285    4517 config.go:182] Loaded profile config "ha-736000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:01:34.048546    4517 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:01:34.051861    4517 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 17:01:34.058954    4517 start.go:297] selected driver: qemu2
	I0721 17:01:34.058963    4517 start.go:901] validating driver "qemu2" against &{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-736000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:01:34.059034    4517 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:01:34.061287    4517 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:01:34.061314    4517 cni.go:84] Creating CNI manager for ""
	I0721 17:01:34.061319    4517 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0721 17:01:34.061364    4517 start.go:340] cluster config:
	{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-736000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:01:34.064795    4517 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:01:34.072878    4517 out.go:177] * Starting "ha-736000" primary control-plane node in "ha-736000" cluster
	I0721 17:01:34.076908    4517 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:01:34.076926    4517 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:01:34.076936    4517 cache.go:56] Caching tarball of preloaded images
	I0721 17:01:34.076996    4517 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:01:34.077002    4517 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:01:34.077060    4517 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/ha-736000/config.json ...
	I0721 17:01:34.077382    4517 start.go:360] acquireMachinesLock for ha-736000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:01:34.077417    4517 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "ha-736000"
	I0721 17:01:34.077425    4517 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:01:34.077431    4517 fix.go:54] fixHost starting: 
	I0721 17:01:34.077537    4517 fix.go:112] recreateIfNeeded on ha-736000: state=Stopped err=<nil>
	W0721 17:01:34.077545    4517 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:01:34.081925    4517 out.go:177] * Restarting existing qemu2 VM for "ha-736000" ...
	I0721 17:01:34.089895    4517 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:01:34.089931    4517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:c8:81:83:de:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/disk.qcow2
	I0721 17:01:34.091898    4517 main.go:141] libmachine: STDOUT: 
	I0721 17:01:34.091917    4517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:01:34.091949    4517 fix.go:56] duration metric: took 14.517042ms for fixHost
	I0721 17:01:34.091954    4517 start.go:83] releasing machines lock for "ha-736000", held for 14.533209ms
	W0721 17:01:34.091959    4517 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:01:34.091993    4517 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:01:34.091998    4517 start.go:729] Will try again in 5 seconds ...
	I0721 17:01:39.094054    4517 start.go:360] acquireMachinesLock for ha-736000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:01:39.094532    4517 start.go:364] duration metric: took 356.875µs to acquireMachinesLock for "ha-736000"
	I0721 17:01:39.094703    4517 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:01:39.094722    4517 fix.go:54] fixHost starting: 
	I0721 17:01:39.095491    4517 fix.go:112] recreateIfNeeded on ha-736000: state=Stopped err=<nil>
	W0721 17:01:39.095519    4517 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:01:39.100109    4517 out.go:177] * Restarting existing qemu2 VM for "ha-736000" ...
	I0721 17:01:39.108070    4517 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:01:39.108374    4517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:c8:81:83:de:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/ha-736000/disk.qcow2
	I0721 17:01:39.117621    4517 main.go:141] libmachine: STDOUT: 
	I0721 17:01:39.117683    4517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:01:39.117780    4517 fix.go:56] duration metric: took 23.05675ms for fixHost
	I0721 17:01:39.117800    4517 start.go:83] releasing machines lock for "ha-736000", held for 23.218084ms
	W0721 17:01:39.117953    4517 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-736000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-736000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:01:39.123014    4517 out.go:177] 
	W0721 17:01:39.127114    4517 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:01:39.127190    4517 out.go:239] * 
	* 
	W0721 17:01:39.129831    4517 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:01:39.141953    4517 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-736000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000: exit status 7 (70.201125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-736000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
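Note: every GUEST_PROVISION failure in this run reports the same underlying error, Failed to connect to "/var/run/socket_vmnet": Connection refused, raised when libmachine launches qemu through socket_vmnet_client. A quick way to confirm whether anything is listening on that socket before re-running the suite is a plain unix-socket dial; the path is taken from the SocketVMnetPath field in the profile config above, and this probe is a diagnostic sketch, not part of the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path as reported in the failure messages and in SocketVMnetPath above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Matches the "Connection refused" seen in the logs when no daemon is listening.
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	_ = conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}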

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-736000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-736000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-736000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-736000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000: exit status 7 (29.71375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-736000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-736000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-736000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.946792ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-736000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-736000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:01:39.330946    4536 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:01:39.331326    4536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:01:39.331330    4536 out.go:304] Setting ErrFile to fd 2...
	I0721 17:01:39.331333    4536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:01:39.331508    4536 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:01:39.331737    4536 mustload.go:65] Loading cluster: ha-736000
	I0721 17:01:39.331960    4536 config.go:182] Loaded profile config "ha-736000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0721 17:01:39.332253    4536 out.go:239] ! The control-plane node ha-736000 host is not running (will try others): state=Stopped
	! The control-plane node ha-736000 host is not running (will try others): state=Stopped
	W0721 17:01:39.332358    4536 out.go:239] ! The control-plane node ha-736000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-736000-m02 host is not running (will try others): state=Stopped
	I0721 17:01:39.335353    4536 out.go:177] * The control-plane node ha-736000-m03 host is not running: state=Stopped
	I0721 17:01:39.339106    4536 out.go:177]   To start a cluster, run: "minikube start -p ha-736000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-736000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-736000 -n ha-736000: exit status 7 (29.194417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-736000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestImageBuild/serial/Setup (10.17s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-433000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-433000 --driver=qemu2 : exit status 80 (10.100700458s)

                                                
                                                
-- stdout --
	* [image-433000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-433000" primary control-plane node in "image-433000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-433000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-433000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-433000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-433000 -n image-433000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-433000 -n image-433000: exit status 7 (67.336708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-433000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.17s)

                                                
                                    
TestJSONOutput/start/Command (9.93s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-930000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-930000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.925377542s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0da6cca0-d253-42e0-b19c-6545c6bbea37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-930000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"61a5909a-4625-4bae-9e73-1f11fdeeaa0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"6aaee8d4-8a97-4948-b95c-402864e8d9e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig"}}
	{"specversion":"1.0","id":"bdc62f5b-d849-4120-b5de-24a29feaec36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"157737e3-1ebf-47d9-85b1-ec59e8ca7216","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5a211926-8966-4805-8ad7-15804aae99d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube"}}
	{"specversion":"1.0","id":"40464b73-4d52-4f72-9519-7cf2b05d5b02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"916c9b2c-9185-41c8-be7b-a5730d8e3429","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8abb364f-1a73-4801-8289-80c561dfd90e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"7e9bbb1f-3844-41ea-8bcd-f07b49b03a4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-930000\" primary control-plane node in \"json-output-930000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c762a9f-c9e7-411f-aa80-4f6c3042df5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"2b1338a3-1fec-48b0-b59e-eb96a54706fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-930000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"a6bb079f-bf04-4f7a-bc85-9821541affa4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"d51a99a3-7930-412a-acdf-f6b3189b78a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"eba0c410-7c11-46c3-9abe-2794d76dc8b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-930000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"2a6eb7bf-602f-414d-b67b-ab997010f317","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"bbaf8de3-8bd8-4fbe-91bf-b6882d5fe591","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-930000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.93s)
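Note: both cloud-events failures in this group (invalid character 'O' here, and invalid character '*' in the unpause test below) come from non-JSON lines being interleaved with the CloudEvents stream on stdout, so line-by-line decoding stops at the first raw "OUTPUT:" or "*" line. A minimal reproduction of that decode step is sketched below; it illustrates the failure mode and is not json_output_test's actual parser.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// One CloudEvent line followed by a raw driver line, as in the stdout captured above.
	out := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=19312"}}
OUTPUT: `
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Reproduces: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("decoded event type:", ev["type"])
	}
}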

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-930000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-930000 --output=json --user=testUser: exit status 83 (77.213542ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a5319b52-e446-48bb-a4e6-23d67342f7bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-930000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"09bd2a97-3fa2-473e-8c24-9a46e26543c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-930000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-930000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-930000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-930000 --output=json --user=testUser: exit status 83 (45.245708ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-930000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-930000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-930000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-930000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.13s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-438000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-438000 --driver=qemu2 : exit status 80 (9.850324291s)

                                                
                                                
-- stdout --
	* [first-438000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-438000" primary control-plane node in "first-438000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-438000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-438000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-438000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-21 17:02:14.227101 -0700 PDT m=+2304.777947751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-440000 -n second-440000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-440000 -n second-440000: exit status 85 (78.911542ms)

                                                
                                                
-- stdout --
	* Profile "second-440000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-440000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-440000" host is not running, skipping log retrieval (state="* Profile \"second-440000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-440000\"")
helpers_test.go:175: Cleaning up "second-440000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-440000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-21 17:02:14.409126 -0700 PDT m=+2304.959977043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-438000 -n first-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-438000 -n first-438000: exit status 7 (29.686791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-438000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-438000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-438000
--- FAIL: TestMinikubeProfile (10.13s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.01s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-281000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-281000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.939682917s)

                                                
                                                
-- stdout --
	* [mount-start-1-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-281000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-281000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-281000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-281000 -n mount-start-1-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-281000 -n mount-start-1-281000: exit status 7 (67.013417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.01s)
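Every qemu2 failure in this run hits the same `Failed to connect to "/var/run/socket_vmnet": Connection refused` line before a VM is ever created, which is why each profile ends up in state=Stopped. A minimal, stand-alone probe of that socket (a sketch assuming only the path reported in the log, not part of the test suite) can confirm whether the socket_vmnet daemon is accepting connections at all:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Path copied from the failing log lines above; the 2s timeout is arbitrary.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}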

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-386000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-386000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.71455825s)

                                                
                                                
-- stdout --
	* [multinode-386000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-386000" primary control-plane node in "multinode-386000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-386000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:02:24.727232    4705 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:02:24.727357    4705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:02:24.727362    4705 out.go:304] Setting ErrFile to fd 2...
	I0721 17:02:24.727364    4705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:02:24.727512    4705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:02:24.728529    4705 out.go:298] Setting JSON to false
	I0721 17:02:24.744671    4705 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3707,"bootTime":1721602837,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:02:24.744736    4705 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:02:24.750510    4705 out.go:177] * [multinode-386000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:02:24.758561    4705 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:02:24.758612    4705 notify.go:220] Checking for updates...
	I0721 17:02:24.766547    4705 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:02:24.769544    4705 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:02:24.772551    4705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:02:24.775483    4705 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:02:24.778516    4705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:02:24.781688    4705 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:02:24.785525    4705 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:02:24.791427    4705 start.go:297] selected driver: qemu2
	I0721 17:02:24.791435    4705 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:02:24.791441    4705 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:02:24.793701    4705 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:02:24.796530    4705 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:02:24.799662    4705 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:02:24.799676    4705 cni.go:84] Creating CNI manager for ""
	I0721 17:02:24.799681    4705 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0721 17:02:24.799688    4705 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0721 17:02:24.799722    4705 start.go:340] cluster config:
	{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:02:24.803464    4705 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:02:24.810457    4705 out.go:177] * Starting "multinode-386000" primary control-plane node in "multinode-386000" cluster
	I0721 17:02:24.814538    4705 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:02:24.814554    4705 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:02:24.814565    4705 cache.go:56] Caching tarball of preloaded images
	I0721 17:02:24.814633    4705 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:02:24.814638    4705 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:02:24.814833    4705 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/multinode-386000/config.json ...
	I0721 17:02:24.814847    4705 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/multinode-386000/config.json: {Name:mk2b78a242e1659f972b481684bb37d8cda3cd27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:02:24.815058    4705 start.go:360] acquireMachinesLock for multinode-386000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:02:24.815092    4705 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "multinode-386000"
	I0721 17:02:24.815103    4705 start.go:93] Provisioning new machine with config: &{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:multinode-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:02:24.815136    4705 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:02:24.823522    4705 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:02:24.840813    4705 start.go:159] libmachine.API.Create for "multinode-386000" (driver="qemu2")
	I0721 17:02:24.840844    4705 client.go:168] LocalClient.Create starting
	I0721 17:02:24.840913    4705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:02:24.840946    4705 main.go:141] libmachine: Decoding PEM data...
	I0721 17:02:24.840955    4705 main.go:141] libmachine: Parsing certificate...
	I0721 17:02:24.841001    4705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:02:24.841026    4705 main.go:141] libmachine: Decoding PEM data...
	I0721 17:02:24.841034    4705 main.go:141] libmachine: Parsing certificate...
	I0721 17:02:24.841388    4705 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:02:24.980921    4705 main.go:141] libmachine: Creating SSH key...
	I0721 17:02:25.019588    4705 main.go:141] libmachine: Creating Disk image...
	I0721 17:02:25.019593    4705 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:02:25.019755    4705 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2
	I0721 17:02:25.029018    4705 main.go:141] libmachine: STDOUT: 
	I0721 17:02:25.029037    4705 main.go:141] libmachine: STDERR: 
	I0721 17:02:25.029085    4705 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2 +20000M
	I0721 17:02:25.036967    4705 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:02:25.036988    4705 main.go:141] libmachine: STDERR: 
	I0721 17:02:25.037008    4705 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2
	I0721 17:02:25.037013    4705 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:02:25.037021    4705 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:02:25.037055    4705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:f7:f6:a2:8d:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2
	I0721 17:02:25.038753    4705 main.go:141] libmachine: STDOUT: 
	I0721 17:02:25.038772    4705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:02:25.038791    4705 client.go:171] duration metric: took 197.949875ms to LocalClient.Create
	I0721 17:02:27.040931    4705 start.go:128] duration metric: took 2.225830834s to createHost
	I0721 17:02:27.041003    4705 start.go:83] releasing machines lock for "multinode-386000", held for 2.225963875s
	W0721 17:02:27.041074    4705 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:02:27.056406    4705 out.go:177] * Deleting "multinode-386000" in qemu2 ...
	W0721 17:02:27.082220    4705 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:02:27.082267    4705 start.go:729] Will try again in 5 seconds ...
	I0721 17:02:32.084309    4705 start.go:360] acquireMachinesLock for multinode-386000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:02:32.084812    4705 start.go:364] duration metric: took 408.834µs to acquireMachinesLock for "multinode-386000"
	I0721 17:02:32.084983    4705 start.go:93] Provisioning new machine with config: &{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:multinode-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:02:32.085332    4705 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:02:32.100929    4705 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:02:32.153610    4705 start.go:159] libmachine.API.Create for "multinode-386000" (driver="qemu2")
	I0721 17:02:32.153655    4705 client.go:168] LocalClient.Create starting
	I0721 17:02:32.153774    4705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:02:32.153836    4705 main.go:141] libmachine: Decoding PEM data...
	I0721 17:02:32.153851    4705 main.go:141] libmachine: Parsing certificate...
	I0721 17:02:32.153910    4705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:02:32.153954    4705 main.go:141] libmachine: Decoding PEM data...
	I0721 17:02:32.153965    4705 main.go:141] libmachine: Parsing certificate...
	I0721 17:02:32.154517    4705 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:02:32.305223    4705 main.go:141] libmachine: Creating SSH key...
	I0721 17:02:32.349441    4705 main.go:141] libmachine: Creating Disk image...
	I0721 17:02:32.349446    4705 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:02:32.349605    4705 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2
	I0721 17:02:32.358743    4705 main.go:141] libmachine: STDOUT: 
	I0721 17:02:32.358763    4705 main.go:141] libmachine: STDERR: 
	I0721 17:02:32.358819    4705 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2 +20000M
	I0721 17:02:32.366633    4705 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:02:32.366649    4705 main.go:141] libmachine: STDERR: 
	I0721 17:02:32.366659    4705 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2
	I0721 17:02:32.366665    4705 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:02:32.366674    4705 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:02:32.366710    4705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:6a:56:b1:61:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2
	I0721 17:02:32.368303    4705 main.go:141] libmachine: STDOUT: 
	I0721 17:02:32.368319    4705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:02:32.368330    4705 client.go:171] duration metric: took 214.675667ms to LocalClient.Create
	I0721 17:02:34.370443    4705 start.go:128] duration metric: took 2.285146916s to createHost
	I0721 17:02:34.370565    4705 start.go:83] releasing machines lock for "multinode-386000", held for 2.285723417s
	W0721 17:02:34.370956    4705 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-386000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-386000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:02:34.380641    4705 out.go:177] 
	W0721 17:02:34.387663    4705 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:02:34.387712    4705 out.go:239] * 
	* 
	W0721 17:02:34.390245    4705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:02:34.399589    4705 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-386000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (67.503208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.78s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (74.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (129.291625ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-386000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- rollout status deployment/busybox: exit status 1 (58.030583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.178ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.562458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.075333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.6735ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.71825ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.964958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.213709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.721417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0721 17:03:09.310383    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.385625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.220709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.170625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.933417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.407125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.573833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (29.390625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (74.80s)
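The pod-IP lookup above is retried ten times over roughly a minute before multinode_test.go:524 gives up; with no API server behind the profile, every attempt fails with the same "no server found for cluster" error. The following is a hypothetical stand-alone version of that polling pattern; only the command line is taken from the log, while the helper name, interval, and timeout are illustrative:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs polls the same kubectl invocation the test uses until it returns a
// non-empty result or ctx expires.
func podIPs(ctx context.Context, profile string) (string, error) {
	var lastErr error
	for {
		out, err := exec.CommandContext(ctx, "out/minikube-darwin-arm64",
			"kubectl", "-p", profile, "--",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			return strings.TrimSpace(string(out)), nil
		}
		lastErr = err
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("gave up waiting for pod IPs: %v", lastErr)
		case <-time.After(5 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	ips, err := podIPs(ctx, "multinode-386000")
	fmt.Println(ips, err)
}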

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.269833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (29.680458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-386000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-386000 -v 3 --alsologtostderr: exit status 83 (39.837834ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-386000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-386000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:03:49.392485    4818 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:03:49.392652    4818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:49.392656    4818 out.go:304] Setting ErrFile to fd 2...
	I0721 17:03:49.392658    4818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:49.392789    4818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:03:49.393025    4818 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:03:49.393200    4818 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:03:49.397395    4818 out.go:177] * The control-plane node multinode-386000 host is not running: state=Stopped
	I0721 17:03:49.400200    4818 out.go:177]   To start a cluster, run: "minikube start -p multinode-386000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-386000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (28.53625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-386000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-386000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.827208ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-386000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-386000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-386000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (29.039208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-386000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-386000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-386000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-386000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (29.048667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
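The assertion above counts the entries under Config.Nodes in the `profile list --output json` payload and finds one where three were expected; because the cluster start and the node add both failed earlier, the saved config still lists a single node. A trimmed decoder for just that slice of the payload, with field names copied from the JSON shown in the log and everything else illustrative, looks roughly like this:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors only the fields the node-count check reads.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name string `json:"Name"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s) in saved config\n", p.Name, len(p.Config.Nodes))
	}
}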

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status --output json --alsologtostderr: exit status 7 (29.013667ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-386000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:03:49.594743    4830 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:03:49.594896    4830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:49.594900    4830 out.go:304] Setting ErrFile to fd 2...
	I0721 17:03:49.594902    4830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:49.595027    4830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:03:49.595145    4830 out.go:298] Setting JSON to true
	I0721 17:03:49.595155    4830 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:03:49.595207    4830 notify.go:220] Checking for updates...
	I0721 17:03:49.595360    4830 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:03:49.595373    4830 status.go:255] checking status of multinode-386000 ...
	I0721 17:03:49.595589    4830 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0721 17:03:49.595592    4830 status.go:343] host is not running, skipping remaining checks
	I0721 17:03:49.595594    4830 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-386000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (29.961375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
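The decode error reported at multinode_test.go:191 ("cannot unmarshal object into Go value of type []cmd.Status") is about the shape of the output rather than its content: here, with a single stopped node, `status --output json` prints one JSON object while the test unmarshals into a slice. A small, hypothetical decoder that tolerates both shapes (struct fields limited to what appears in the stdout block above):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// status holds only the fields visible in the stdout block above.
type status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts either a single JSON object (one node) or an array.
func decodeStatuses(raw []byte) ([]status, error) {
	raw = bytes.TrimSpace(raw)
	if len(raw) > 0 && raw[0] == '{' {
		var s status
		if err := json.Unmarshal(raw, &s); err != nil {
			return nil, err
		}
		return []status{s}, nil
	}
	var ss []status
	return ss, json.Unmarshal(raw, &ss)
}

func main() {
	// Payload copied from the -- stdout -- block above.
	raw := []byte(`{"Name":"multinode-386000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	ss, err := decodeStatuses(raw)
	fmt.Println(ss, err)
}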

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 node stop m03: exit status 85 (42.0515ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-386000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status: exit status 7 (29.563208ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr: exit status 7 (29.823167ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:03:49.726890    4838 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:03:49.727040    4838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:49.727044    4838 out.go:304] Setting ErrFile to fd 2...
	I0721 17:03:49.727047    4838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:49.727178    4838 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:03:49.727303    4838 out.go:298] Setting JSON to false
	I0721 17:03:49.727313    4838 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:03:49.727379    4838 notify.go:220] Checking for updates...
	I0721 17:03:49.727500    4838 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:03:49.727506    4838 status.go:255] checking status of multinode-386000 ...
	I0721 17:03:49.727720    4838 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0721 17:03:49.727725    4838 status.go:343] host is not running, skipping remaining checks
	I0721 17:03:49.727727    4838 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr": multinode-386000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (28.538667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
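
Note on the post-mortem helper used throughout these failures: `minikube status --format={{.Host}}` takes a Go template, so only the Host field is rendered, which is why the post-mortem output is the single word "Stopped". A minimal sketch of that rendering (the Status struct below is a simplified stand-in, not minikube's actual type):

	// format_sketch.go - renders a {{.Host}} template the way --format does.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Name string
		Host string
	}

	func main() {
		st := Status{Name: "multinode-386000", Host: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil { // prints: Stopped
			panic(err)
		}
	}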

                                                
                                    
TestMultiNode/serial/StartAfterStop (49.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.633583ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:03:49.784715    4842 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:03:49.784923    4842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:49.784926    4842 out.go:304] Setting ErrFile to fd 2...
	I0721 17:03:49.784928    4842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:49.785079    4842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:03:49.785311    4842 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:03:49.785501    4842 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:03:49.790242    4842 out.go:177] 
	W0721 17:03:49.793214    4842 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0721 17:03:49.793219    4842 out.go:239] * 
	* 
	W0721 17:03:49.794898    4842 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:03:49.798106    4842 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0721 17:03:49.784715    4842 out.go:291] Setting OutFile to fd 1 ...
I0721 17:03:49.784923    4842 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 17:03:49.784926    4842 out.go:304] Setting ErrFile to fd 2...
I0721 17:03:49.784928    4842 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 17:03:49.785079    4842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
I0721 17:03:49.785311    4842 mustload.go:65] Loading cluster: multinode-386000
I0721 17:03:49.785501    4842 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 17:03:49.790242    4842 out.go:177] 
W0721 17:03:49.793214    4842 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0721 17:03:49.793219    4842 out.go:239] * 
* 
W0721 17:03:49.794898    4842 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0721 17:03:49.798106    4842 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-386000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr: exit status 7 (29.316417ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:03:49.829798    4844 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:03:49.829947    4844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:49.829950    4844 out.go:304] Setting ErrFile to fd 2...
	I0721 17:03:49.829952    4844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:49.830093    4844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:03:49.830211    4844 out.go:298] Setting JSON to false
	I0721 17:03:49.830221    4844 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:03:49.830272    4844 notify.go:220] Checking for updates...
	I0721 17:03:49.830397    4844 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:03:49.830403    4844 status.go:255] checking status of multinode-386000 ...
	I0721 17:03:49.830623    4844 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0721 17:03:49.830626    4844 status.go:343] host is not running, skipping remaining checks
	I0721 17:03:49.830629    4844 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr: exit status 7 (73.419416ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:03:50.791126    4846 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:03:50.791337    4846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:50.791341    4846 out.go:304] Setting ErrFile to fd 2...
	I0721 17:03:50.791344    4846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:50.791534    4846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:03:50.791692    4846 out.go:298] Setting JSON to false
	I0721 17:03:50.791704    4846 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:03:50.791742    4846 notify.go:220] Checking for updates...
	I0721 17:03:50.791962    4846 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:03:50.791970    4846 status.go:255] checking status of multinode-386000 ...
	I0721 17:03:50.792236    4846 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0721 17:03:50.792241    4846 status.go:343] host is not running, skipping remaining checks
	I0721 17:03:50.792251    4846 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr: exit status 7 (73.271333ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:03:52.082765    4850 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:03:52.082949    4850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:52.082953    4850 out.go:304] Setting ErrFile to fd 2...
	I0721 17:03:52.082956    4850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:52.083153    4850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:03:52.083319    4850 out.go:298] Setting JSON to false
	I0721 17:03:52.083333    4850 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:03:52.083381    4850 notify.go:220] Checking for updates...
	I0721 17:03:52.083593    4850 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:03:52.083601    4850 status.go:255] checking status of multinode-386000 ...
	I0721 17:03:52.083877    4850 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0721 17:03:52.083881    4850 status.go:343] host is not running, skipping remaining checks
	I0721 17:03:52.083884    4850 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr: exit status 7 (72.453584ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:03:53.873222    4854 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:03:53.873416    4854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:53.873421    4854 out.go:304] Setting ErrFile to fd 2...
	I0721 17:03:53.873424    4854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:53.873576    4854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:03:53.873734    4854 out.go:298] Setting JSON to false
	I0721 17:03:53.873747    4854 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:03:53.873776    4854 notify.go:220] Checking for updates...
	I0721 17:03:53.874004    4854 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:03:53.874012    4854 status.go:255] checking status of multinode-386000 ...
	I0721 17:03:53.874314    4854 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0721 17:03:53.874318    4854 status.go:343] host is not running, skipping remaining checks
	I0721 17:03:53.874321    4854 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr: exit status 7 (71.924167ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:03:56.350762    4858 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:03:56.350953    4858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:56.350958    4858 out.go:304] Setting ErrFile to fd 2...
	I0721 17:03:56.350961    4858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:03:56.351110    4858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:03:56.351276    4858 out.go:298] Setting JSON to false
	I0721 17:03:56.351289    4858 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:03:56.351327    4858 notify.go:220] Checking for updates...
	I0721 17:03:56.351543    4858 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:03:56.351551    4858 status.go:255] checking status of multinode-386000 ...
	I0721 17:03:56.351818    4858 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0721 17:03:56.351823    4858 status.go:343] host is not running, skipping remaining checks
	I0721 17:03:56.351826    4858 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr: exit status 7 (72.8435ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:04:02.966340    4860 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:04:02.966534    4860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:02.966542    4860 out.go:304] Setting ErrFile to fd 2...
	I0721 17:04:02.966546    4860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:02.966742    4860 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:04:02.966933    4860 out.go:298] Setting JSON to false
	I0721 17:04:02.966950    4860 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:04:02.966997    4860 notify.go:220] Checking for updates...
	I0721 17:04:02.967253    4860 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:04:02.967262    4860 status.go:255] checking status of multinode-386000 ...
	I0721 17:04:02.967576    4860 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0721 17:04:02.967582    4860 status.go:343] host is not running, skipping remaining checks
	I0721 17:04:02.967585    4860 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr: exit status 7 (80.497416ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:04:13.033647    4862 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:04:13.033845    4862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:13.033851    4862 out.go:304] Setting ErrFile to fd 2...
	I0721 17:04:13.033854    4862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:13.034029    4862 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:04:13.034212    4862 out.go:298] Setting JSON to false
	I0721 17:04:13.034226    4862 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:04:13.034272    4862 notify.go:220] Checking for updates...
	I0721 17:04:13.034497    4862 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:04:13.034506    4862 status.go:255] checking status of multinode-386000 ...
	I0721 17:04:13.034822    4862 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0721 17:04:13.034827    4862 status.go:343] host is not running, skipping remaining checks
	I0721 17:04:13.034830    4862 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr: exit status 7 (71.24225ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:04:22.898285    4868 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:04:22.898527    4868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:22.898532    4868 out.go:304] Setting ErrFile to fd 2...
	I0721 17:04:22.898535    4868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:22.898738    4868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:04:22.898935    4868 out.go:298] Setting JSON to false
	I0721 17:04:22.898950    4868 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:04:22.898992    4868 notify.go:220] Checking for updates...
	I0721 17:04:22.899243    4868 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:04:22.899259    4868 status.go:255] checking status of multinode-386000 ...
	I0721 17:04:22.899557    4868 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0721 17:04:22.899562    4868 status.go:343] host is not running, skipping remaining checks
	I0721 17:04:22.899565    4868 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr: exit status 7 (72.961209ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:04:39.251238    4878 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:04:39.251455    4878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:39.251459    4878 out.go:304] Setting ErrFile to fd 2...
	I0721 17:04:39.251463    4878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:39.251652    4878 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:04:39.251827    4878 out.go:298] Setting JSON to false
	I0721 17:04:39.251839    4878 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:04:39.251883    4878 notify.go:220] Checking for updates...
	I0721 17:04:39.252124    4878 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:04:39.252133    4878 status.go:255] checking status of multinode-386000 ...
	I0721 17:04:39.252440    4878 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0721 17:04:39.252446    4878 status.go:343] host is not running, skipping remaining checks
	I0721 17:04:39.252449    4878 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-386000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (33.960042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (49.53s)
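
Note on the 49.53s duration above: each individual status call returns in well under 100ms, but the timestamps on the repeated attempts (17:03:49, :50, :52, :53, :56, 17:04:02, :13, :22, :39) show progressively longer gaps, consistent with a retry loop that backs off between polls while waiting for the node to become healthy. A rough sketch of that polling pattern (intervals and the checkStatus stand-in are illustrative, not the test's actual helper):

	// poll_sketch.go - exponential-backoff polling, illustrative only.
	package main

	import (
		"fmt"
		"time"
	)

	// checkStatus is a hypothetical stand-in for running `minikube status`.
	func checkStatus() bool { return false }

	func main() {
		delay := time.Second
		deadline := time.Now().Add(50 * time.Second)
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if checkStatus() {
				fmt.Println("node is healthy")
				return
			}
			fmt.Printf("attempt %d failed, retrying in %v\n", attempt, delay)
			time.Sleep(delay)
			delay *= 2 // back off before the next poll
		}
		fmt.Println("gave up waiting for node")
	}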

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-386000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-386000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-386000: (2.832955958s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-386000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-386000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.226172375s)

                                                
                                                
-- stdout --
	* [multinode-386000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-386000" primary control-plane node in "multinode-386000" cluster
	* Restarting existing qemu2 VM for "multinode-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:04:42.219326    4902 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:04:42.219501    4902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:42.219505    4902 out.go:304] Setting ErrFile to fd 2...
	I0721 17:04:42.219508    4902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:42.219677    4902 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:04:42.220976    4902 out.go:298] Setting JSON to false
	I0721 17:04:42.240992    4902 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3845,"bootTime":1721602837,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:04:42.241060    4902 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:04:42.246088    4902 out.go:177] * [multinode-386000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:04:42.252006    4902 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:04:42.252079    4902 notify.go:220] Checking for updates...
	I0721 17:04:42.257863    4902 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:04:42.260887    4902 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:04:42.263982    4902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:04:42.266888    4902 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:04:42.269981    4902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:04:42.273274    4902 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:04:42.273334    4902 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:04:42.276902    4902 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 17:04:42.283965    4902 start.go:297] selected driver: qemu2
	I0721 17:04:42.283972    4902 start.go:901] validating driver "qemu2" against &{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:04:42.284040    4902 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:04:42.286541    4902 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:04:42.286595    4902 cni.go:84] Creating CNI manager for ""
	I0721 17:04:42.286606    4902 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0721 17:04:42.286652    4902 start.go:340] cluster config:
	{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:04:42.290654    4902 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:04:42.297908    4902 out.go:177] * Starting "multinode-386000" primary control-plane node in "multinode-386000" cluster
	I0721 17:04:42.301938    4902 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:04:42.301955    4902 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:04:42.301967    4902 cache.go:56] Caching tarball of preloaded images
	I0721 17:04:42.302031    4902 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:04:42.302037    4902 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:04:42.302094    4902 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/multinode-386000/config.json ...
	I0721 17:04:42.302508    4902 start.go:360] acquireMachinesLock for multinode-386000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:04:42.302543    4902 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "multinode-386000"
	I0721 17:04:42.302551    4902 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:04:42.302556    4902 fix.go:54] fixHost starting: 
	I0721 17:04:42.302677    4902 fix.go:112] recreateIfNeeded on multinode-386000: state=Stopped err=<nil>
	W0721 17:04:42.302685    4902 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:04:42.310929    4902 out.go:177] * Restarting existing qemu2 VM for "multinode-386000" ...
	I0721 17:04:42.314916    4902 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:04:42.314953    4902 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:6a:56:b1:61:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2
	I0721 17:04:42.317094    4902 main.go:141] libmachine: STDOUT: 
	I0721 17:04:42.317115    4902 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:04:42.317144    4902 fix.go:56] duration metric: took 14.588208ms for fixHost
	I0721 17:04:42.317148    4902 start.go:83] releasing machines lock for "multinode-386000", held for 14.601875ms
	W0721 17:04:42.317156    4902 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:04:42.317182    4902 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:04:42.317187    4902 start.go:729] Will try again in 5 seconds ...
	I0721 17:04:47.319259    4902 start.go:360] acquireMachinesLock for multinode-386000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:04:47.319781    4902 start.go:364] duration metric: took 390.917µs to acquireMachinesLock for "multinode-386000"
	I0721 17:04:47.319920    4902 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:04:47.319943    4902 fix.go:54] fixHost starting: 
	I0721 17:04:47.320624    4902 fix.go:112] recreateIfNeeded on multinode-386000: state=Stopped err=<nil>
	W0721 17:04:47.320651    4902 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:04:47.325355    4902 out.go:177] * Restarting existing qemu2 VM for "multinode-386000" ...
	I0721 17:04:47.332278    4902 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:04:47.332544    4902 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:6a:56:b1:61:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2
	I0721 17:04:47.342025    4902 main.go:141] libmachine: STDOUT: 
	I0721 17:04:47.342086    4902 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:04:47.342180    4902 fix.go:56] duration metric: took 22.238792ms for fixHost
	I0721 17:04:47.342197    4902 start.go:83] releasing machines lock for "multinode-386000", held for 22.391708ms
	W0721 17:04:47.342412    4902 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-386000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-386000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:04:47.350267    4902 out.go:177] 
	W0721 17:04:47.354329    4902 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:04:47.354354    4902 out.go:239] * 
	* 
	W0721 17:04:47.357095    4902 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:04:47.363211    4902 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-386000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-386000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (33.02ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.19s)
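
Note on the root cause visible above (and in the remaining restart failures): the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which needs a socket_vmnet daemon listening on the unix socket /var/run/socket_vmnet. "Connection refused" indicates nothing is listening on that socket on this host, so the VM never boots and every later step fails. A minimal probe that reproduces the same check (diagnostic sketch, not part of minikube):

	// vmnet_probe.go - checks whether anything is listening on the socket_vmnet socket.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On the failing host this reports: connect: connection refused
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening")
	}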

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 node delete m03: exit status 83 (38.409833ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-386000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-386000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-386000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr: exit status 7 (28.792708ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:04:47.544255    4921 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:04:47.544415    4921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:47.544424    4921 out.go:304] Setting ErrFile to fd 2...
	I0721 17:04:47.544426    4921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:47.544553    4921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:04:47.544672    4921 out.go:298] Setting JSON to false
	I0721 17:04:47.544682    4921 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:04:47.544736    4921 notify.go:220] Checking for updates...
	I0721 17:04:47.544874    4921 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:04:47.544880    4921 status.go:255] checking status of multinode-386000 ...
	I0721 17:04:47.545083    4921 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0721 17:04:47.545087    4921 status.go:343] host is not running, skipping remaining checks
	I0721 17:04:47.545089    4921 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (29.647083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-386000 stop: (3.516488209s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status: exit status 7 (61.484167ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr: exit status 7 (31.859084ms)

                                                
                                                
-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:04:51.184180    4945 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:04:51.184326    4945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:51.184329    4945 out.go:304] Setting ErrFile to fd 2...
	I0721 17:04:51.184331    4945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:51.184461    4945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:04:51.184584    4945 out.go:298] Setting JSON to false
	I0721 17:04:51.184595    4945 mustload.go:65] Loading cluster: multinode-386000
	I0721 17:04:51.184660    4945 notify.go:220] Checking for updates...
	I0721 17:04:51.184792    4945 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:04:51.184798    4945 status.go:255] checking status of multinode-386000 ...
	I0721 17:04:51.185005    4945 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0721 17:04:51.185008    4945 status.go:343] host is not running, skipping remaining checks
	I0721 17:04:51.185011    4945 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr": multinode-386000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr": multinode-386000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (29.351833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.64s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-386000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-386000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.183656209s)

-- stdout --
	* [multinode-386000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-386000" primary control-plane node in "multinode-386000" cluster
	* Restarting existing qemu2 VM for "multinode-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:04:51.242458    4949 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:04:51.242584    4949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:51.242587    4949 out.go:304] Setting ErrFile to fd 2...
	I0721 17:04:51.242590    4949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:04:51.242705    4949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:04:51.243702    4949 out.go:298] Setting JSON to false
	I0721 17:04:51.259517    4949 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3854,"bootTime":1721602837,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:04:51.259578    4949 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:04:51.264925    4949 out.go:177] * [multinode-386000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:04:51.271802    4949 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:04:51.271841    4949 notify.go:220] Checking for updates...
	I0721 17:04:51.278839    4949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:04:51.281834    4949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:04:51.284855    4949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:04:51.287849    4949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:04:51.290869    4949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:04:51.294059    4949 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:04:51.294322    4949 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:04:51.298841    4949 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 17:04:51.305770    4949 start.go:297] selected driver: qemu2
	I0721 17:04:51.305776    4949 start.go:901] validating driver "qemu2" against &{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:multinode-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:04:51.305825    4949 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:04:51.308009    4949 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:04:51.308047    4949 cni.go:84] Creating CNI manager for ""
	I0721 17:04:51.308051    4949 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0721 17:04:51.308094    4949 start.go:340] cluster config:
	{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-386000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:04:51.311701    4949 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:04:51.318824    4949 out.go:177] * Starting "multinode-386000" primary control-plane node in "multinode-386000" cluster
	I0721 17:04:51.322851    4949 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:04:51.322868    4949 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:04:51.322877    4949 cache.go:56] Caching tarball of preloaded images
	I0721 17:04:51.322931    4949 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:04:51.322937    4949 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:04:51.323003    4949 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/multinode-386000/config.json ...
	I0721 17:04:51.323440    4949 start.go:360] acquireMachinesLock for multinode-386000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:04:51.323476    4949 start.go:364] duration metric: took 26.667µs to acquireMachinesLock for "multinode-386000"
	I0721 17:04:51.323485    4949 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:04:51.323491    4949 fix.go:54] fixHost starting: 
	I0721 17:04:51.323614    4949 fix.go:112] recreateIfNeeded on multinode-386000: state=Stopped err=<nil>
	W0721 17:04:51.323623    4949 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:04:51.331830    4949 out.go:177] * Restarting existing qemu2 VM for "multinode-386000" ...
	I0721 17:04:51.334740    4949 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:04:51.334785    4949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:6a:56:b1:61:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2
	I0721 17:04:51.336862    4949 main.go:141] libmachine: STDOUT: 
	I0721 17:04:51.336882    4949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:04:51.336913    4949 fix.go:56] duration metric: took 13.422ms for fixHost
	I0721 17:04:51.336917    4949 start.go:83] releasing machines lock for "multinode-386000", held for 13.43675ms
	W0721 17:04:51.336925    4949 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:04:51.336963    4949 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:04:51.336968    4949 start.go:729] Will try again in 5 seconds ...
	I0721 17:04:56.338986    4949 start.go:360] acquireMachinesLock for multinode-386000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:04:56.339500    4949 start.go:364] duration metric: took 379.75µs to acquireMachinesLock for "multinode-386000"
	I0721 17:04:56.339636    4949 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:04:56.339657    4949 fix.go:54] fixHost starting: 
	I0721 17:04:56.340396    4949 fix.go:112] recreateIfNeeded on multinode-386000: state=Stopped err=<nil>
	W0721 17:04:56.340429    4949 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:04:56.347694    4949 out.go:177] * Restarting existing qemu2 VM for "multinode-386000" ...
	I0721 17:04:56.351794    4949 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:04:56.352027    4949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:6a:56:b1:61:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/multinode-386000/disk.qcow2
	I0721 17:04:56.361231    4949 main.go:141] libmachine: STDOUT: 
	I0721 17:04:56.361287    4949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:04:56.361363    4949 fix.go:56] duration metric: took 21.70775ms for fixHost
	I0721 17:04:56.361384    4949 start.go:83] releasing machines lock for "multinode-386000", held for 21.855875ms
	W0721 17:04:56.361588    4949 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-386000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-386000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:04:56.371792    4949 out.go:177] 
	W0721 17:04:56.375881    4949 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:04:56.375910    4949 out.go:239] * 
	* 
	W0721 17:04:56.378639    4949 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:04:56.385858    4949 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-386000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (67.682917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
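
Every TestMultiNode failure above shares the same root cause: the qemu2 driver cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never restarts. As a minimal sketch, assuming only the socket path /var/run/socket_vmnet that appears in the socket_vmnet_client command lines logged above, the following standalone Go probe performs the same kind of unix-socket connection that is being refused and can be run on the agent to confirm whether the daemon is accepting connections:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Socket path copied from the socket_vmnet_client invocations in the log above.
        const sock = "/var/run/socket_vmnet"

        // This dial is the step that fails with "Connection refused" in these tests;
        // a healthy socket_vmnet daemon accepts the connection immediately.
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
    }

If the probe fails in the same way, the socket_vmnet service on the agent is down or mis-permissioned; restarting it (for example via Homebrew services, if that is how it was installed) is the likely fix, whereas the "minikube delete -p ..." advice in the output will not help while the daemon itself is unreachable.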

TestMultiNode/serial/ValidateNameConflict (20.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-386000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-386000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-386000-m01 --driver=qemu2 : exit status 80 (9.86365475s)

-- stdout --
	* [multinode-386000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-386000-m01" primary control-plane node in "multinode-386000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-386000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-386000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-386000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-386000-m02 --driver=qemu2 : exit status 80 (10.045214709s)

-- stdout --
	* [multinode-386000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-386000-m02" primary control-plane node in "multinode-386000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-386000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-386000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-386000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-386000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-386000: exit status 83 (79.837833ms)

-- stdout --
	* The control-plane node multinode-386000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-386000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-386000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (29.719834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.13s)

TestPreload (9.98s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-494000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0721 17:05:19.007923    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-494000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.836908125s)

-- stdout --
	* [test-preload-494000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-494000" primary control-plane node in "test-preload-494000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-494000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:05:16.734969    5014 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:05:16.735107    5014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:05:16.735110    5014 out.go:304] Setting ErrFile to fd 2...
	I0721 17:05:16.735112    5014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:05:16.735245    5014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:05:16.736329    5014 out.go:298] Setting JSON to false
	I0721 17:05:16.752146    5014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3879,"bootTime":1721602837,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:05:16.752220    5014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:05:16.758581    5014 out.go:177] * [test-preload-494000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:05:16.765425    5014 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:05:16.765444    5014 notify.go:220] Checking for updates...
	I0721 17:05:16.772590    5014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:05:16.775526    5014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:05:16.778583    5014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:05:16.781585    5014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:05:16.782983    5014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:05:16.785888    5014 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:05:16.785946    5014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:05:16.789539    5014 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:05:16.794513    5014 start.go:297] selected driver: qemu2
	I0721 17:05:16.794519    5014 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:05:16.794524    5014 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:05:16.796766    5014 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:05:16.799511    5014 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:05:16.802672    5014 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:05:16.802712    5014 cni.go:84] Creating CNI manager for ""
	I0721 17:05:16.802721    5014 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:05:16.802725    5014 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:05:16.802773    5014 start.go:340] cluster config:
	{Name:test-preload-494000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/so
cket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:05:16.806473    5014 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:05:16.814559    5014 out.go:177] * Starting "test-preload-494000" primary control-plane node in "test-preload-494000" cluster
	I0721 17:05:16.818528    5014 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0721 17:05:16.818609    5014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/test-preload-494000/config.json ...
	I0721 17:05:16.818633    5014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/test-preload-494000/config.json: {Name:mk4bc9e091e97a37a87412f2c839ce6b4fe2c629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:05:16.818631    5014 cache.go:107] acquiring lock: {Name:mk37c8ba2807fe70c5379c9eb853c648079a10a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:05:16.818634    5014 cache.go:107] acquiring lock: {Name:mk23e1a5adc8052546ad5ee221d04394b7657d40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:05:16.818639    5014 cache.go:107] acquiring lock: {Name:mkf29bc5507b4d8414c62e8028454201faf80e07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:05:16.818670    5014 cache.go:107] acquiring lock: {Name:mkd0868375d09597dc47adc09d4911244b7d10b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:05:16.818855    5014 cache.go:107] acquiring lock: {Name:mk6443694aca917372aa232118a43ec8243d3c8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:05:16.818873    5014 cache.go:107] acquiring lock: {Name:mk2a23c4df6c61d389618f3b3700b78b28f2a7fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:05:16.818866    5014 cache.go:107] acquiring lock: {Name:mk4853708eef9b8474dd9ff348875ea7651587e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:05:16.818887    5014 start.go:360] acquireMachinesLock for test-preload-494000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:05:16.818970    5014 start.go:364] duration metric: took 73.5µs to acquireMachinesLock for "test-preload-494000"
	I0721 17:05:16.818974    5014 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0721 17:05:16.818965    5014 cache.go:107] acquiring lock: {Name:mkf7cbe3723a2796b854b8df317da4047168ef02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:05:16.819034    5014 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0721 17:05:16.818984    5014 start.go:93] Provisioning new machine with config: &{Name:test-preload-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:05:16.819043    5014 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0721 17:05:16.819065    5014 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0721 17:05:16.819046    5014 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:05:16.818983    5014 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0721 17:05:16.819239    5014 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0721 17:05:16.819245    5014 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:05:16.819583    5014 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:05:16.823345    5014 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:05:16.826998    5014 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0721 17:05:16.827041    5014 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0721 17:05:16.827154    5014 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:05:16.827214    5014 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0721 17:05:16.827824    5014 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0721 17:05:16.827846    5014 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0721 17:05:16.827806    5014 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0721 17:05:16.829305    5014 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:05:16.841713    5014 start.go:159] libmachine.API.Create for "test-preload-494000" (driver="qemu2")
	I0721 17:05:16.841734    5014 client.go:168] LocalClient.Create starting
	I0721 17:05:16.841829    5014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:05:16.841859    5014 main.go:141] libmachine: Decoding PEM data...
	I0721 17:05:16.841869    5014 main.go:141] libmachine: Parsing certificate...
	I0721 17:05:16.841905    5014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:05:16.841928    5014 main.go:141] libmachine: Decoding PEM data...
	I0721 17:05:16.841934    5014 main.go:141] libmachine: Parsing certificate...
	I0721 17:05:16.842298    5014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:05:16.984937    5014 main.go:141] libmachine: Creating SSH key...
	I0721 17:05:17.090006    5014 main.go:141] libmachine: Creating Disk image...
	I0721 17:05:17.090026    5014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:05:17.090180    5014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/disk.qcow2
	I0721 17:05:17.099935    5014 main.go:141] libmachine: STDOUT: 
	I0721 17:05:17.099956    5014 main.go:141] libmachine: STDERR: 
	I0721 17:05:17.100000    5014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/disk.qcow2 +20000M
	I0721 17:05:17.108865    5014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:05:17.108907    5014 main.go:141] libmachine: STDERR: 
	I0721 17:05:17.108936    5014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/disk.qcow2
	I0721 17:05:17.108942    5014 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:05:17.108963    5014 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:05:17.108997    5014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:d7:cd:f5:cf:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/disk.qcow2
	I0721 17:05:17.110972    5014 main.go:141] libmachine: STDOUT: 
	I0721 17:05:17.110998    5014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:05:17.111017    5014 client.go:171] duration metric: took 269.286792ms to LocalClient.Create
	W0721 17:05:18.995518    5014 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0721 17:05:18.995657    5014 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0721 17:05:19.112131    5014 start.go:128] duration metric: took 2.293063s to createHost
	I0721 17:05:19.112182    5014 start.go:83] releasing machines lock for "test-preload-494000", held for 2.29326625s
	W0721 17:05:19.112240    5014 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:05:19.125212    5014 out.go:177] * Deleting "test-preload-494000" in qemu2 ...
	I0721 17:05:19.137773    5014 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0721 17:05:19.153439    5014 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:05:19.153466    5014 start.go:729] Will try again in 5 seconds ...
	I0721 17:05:19.175024    5014 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0721 17:05:19.175810    5014 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0721 17:05:19.747090    5014 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0721 17:05:19.776993    5014 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0721 17:05:19.827622    5014 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0721 17:05:19.873129    5014 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0721 17:05:19.873212    5014 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0721 17:05:19.917527    5014 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0721 17:05:19.917586    5014 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 3.098854167s
	I0721 17:05:19.917626    5014 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0721 17:05:20.387708    5014 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0721 17:05:20.387761    5014 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.569221167s
	I0721 17:05:20.387830    5014 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0721 17:05:20.852120    5014 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0721 17:05:20.852171    5014 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.033440084s
	I0721 17:05:20.852202    5014 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0721 17:05:22.008795    5014 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0721 17:05:22.008850    5014 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 5.190355167s
	I0721 17:05:22.008874    5014 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0721 17:05:22.947418    5014 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0721 17:05:22.947462    5014 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.129001167s
	I0721 17:05:22.947489    5014 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0721 17:05:24.053456    5014 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0721 17:05:24.053504    5014 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 7.234871709s
	I0721 17:05:24.053527    5014 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0721 17:05:24.153549    5014 start.go:360] acquireMachinesLock for test-preload-494000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:05:24.153984    5014 start.go:364] duration metric: took 375.875µs to acquireMachinesLock for "test-preload-494000"
	I0721 17:05:24.154088    5014 start.go:93] Provisioning new machine with config: &{Name:test-preload-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:05:24.154363    5014 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:05:24.159734    5014 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:05:24.199844    5014 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0721 17:05:24.199960    5014 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.381491125s
	I0721 17:05:24.199979    5014 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0721 17:05:24.211492    5014 start.go:159] libmachine.API.Create for "test-preload-494000" (driver="qemu2")
	I0721 17:05:24.211527    5014 client.go:168] LocalClient.Create starting
	I0721 17:05:24.211636    5014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:05:24.211703    5014 main.go:141] libmachine: Decoding PEM data...
	I0721 17:05:24.211719    5014 main.go:141] libmachine: Parsing certificate...
	I0721 17:05:24.211805    5014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:05:24.211851    5014 main.go:141] libmachine: Decoding PEM data...
	I0721 17:05:24.211868    5014 main.go:141] libmachine: Parsing certificate...
	I0721 17:05:24.212344    5014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:05:24.364778    5014 main.go:141] libmachine: Creating SSH key...
	I0721 17:05:24.474867    5014 main.go:141] libmachine: Creating Disk image...
	I0721 17:05:24.474876    5014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:05:24.475055    5014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/disk.qcow2
	I0721 17:05:24.484739    5014 main.go:141] libmachine: STDOUT: 
	I0721 17:05:24.484759    5014 main.go:141] libmachine: STDERR: 
	I0721 17:05:24.484808    5014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/disk.qcow2 +20000M
	I0721 17:05:24.492899    5014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:05:24.492912    5014 main.go:141] libmachine: STDERR: 
	I0721 17:05:24.492922    5014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/disk.qcow2
	I0721 17:05:24.492928    5014 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:05:24.492943    5014 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:05:24.492984    5014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:4d:a6:2d:64:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/test-preload-494000/disk.qcow2
	I0721 17:05:24.494754    5014 main.go:141] libmachine: STDOUT: 
	I0721 17:05:24.494773    5014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:05:24.494788    5014 client.go:171] duration metric: took 283.26225ms to LocalClient.Create
	I0721 17:05:26.494955    5014 start.go:128] duration metric: took 2.340582958s to createHost
	I0721 17:05:26.495006    5014 start.go:83] releasing machines lock for "test-preload-494000", held for 2.3410645s
	W0721 17:05:26.495237    5014 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-494000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-494000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:05:26.508776    5014 out.go:177] 
	W0721 17:05:26.511094    5014 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:05:26.511117    5014 out.go:239] * 
	* 
	W0721 17:05:26.513782    5014 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:05:26.527685    5014 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-494000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-21 17:05:26.546743 -0700 PDT m=+2497.102916793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-494000 -n test-preload-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-494000 -n test-preload-494000: exit status 7 (64.988042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-494000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-494000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-494000
--- FAIL: TestPreload (9.98s)

                                                
                                    
TestScheduledStopUnix (9.98s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-861000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-861000 --memory=2048 --driver=qemu2 : exit status 80 (9.830919292s)

                                                
                                                
-- stdout --
	* [scheduled-stop-861000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-861000" primary control-plane node in "scheduled-stop-861000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-861000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-861000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-861000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-861000" primary control-plane node in "scheduled-stop-861000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-861000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-861000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-21 17:05:36.518356 -0700 PDT m=+2507.074806209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-861000 -n scheduled-stop-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-861000 -n scheduled-stop-861000: exit status 7 (67.251458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-861000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-861000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-861000
--- FAIL: TestScheduledStopUnix (9.98s)

                                                
                                    
TestSkaffold (12.29s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe614429638 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe614429638 version: (1.064178667s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-569000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-569000 --memory=2600 --driver=qemu2 : exit status 80 (9.909537958s)

                                                
                                                
-- stdout --
	* [skaffold-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-569000" primary control-plane node in "skaffold-569000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-569000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-569000" primary control-plane node in "skaffold-569000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-569000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-21 17:05:48.819628 -0700 PDT m=+2519.376418918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-569000 -n skaffold-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-569000 -n skaffold-569000: exit status 7 (61.316583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-569000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-569000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-569000
--- FAIL: TestSkaffold (12.29s)

                                                
                                    
TestRunningBinaryUpgrade (600.45s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1746170024 start -p running-upgrade-647000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1746170024 start -p running-upgrade-647000 --memory=2200 --vm-driver=qemu2 : (55.2501825s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-647000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0721 17:08:09.302863    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-647000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m31.295460208s)

                                                
                                                
-- stdout --
	* [running-upgrade-647000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-647000" primary control-plane node in "running-upgrade-647000" cluster
	* Updating the running qemu2 "running-upgrade-647000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:07:26.727628    5424 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:07:26.727778    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:07:26.727785    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:07:26.727787    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:07:26.727918    5424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:07:26.729023    5424 out.go:298] Setting JSON to false
	I0721 17:07:26.745255    5424 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4009,"bootTime":1721602837,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:07:26.745319    5424 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:07:26.749123    5424 out.go:177] * [running-upgrade-647000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:07:26.757015    5424 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:07:26.757070    5424 notify.go:220] Checking for updates...
	I0721 17:07:26.764106    5424 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:07:26.767030    5424 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:07:26.770104    5424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:07:26.773101    5424 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:07:26.775989    5424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:07:26.779322    5424 config.go:182] Loaded profile config "running-upgrade-647000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:07:26.782016    5424 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0721 17:07:26.785101    5424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:07:26.789035    5424 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 17:07:26.796032    5424 start.go:297] selected driver: qemu2
	I0721 17:07:26.796039    5424 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-647000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50285 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0721 17:07:26.796105    5424 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:07:26.798383    5424 cni.go:84] Creating CNI manager for ""
	I0721 17:07:26.798402    5424 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:07:26.798435    5424 start.go:340] cluster config:
	{Name:running-upgrade-647000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50285 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0721 17:07:26.798483    5424 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:07:26.806044    5424 out.go:177] * Starting "running-upgrade-647000" primary control-plane node in "running-upgrade-647000" cluster
	I0721 17:07:26.810020    5424 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0721 17:07:26.810034    5424 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0721 17:07:26.810041    5424 cache.go:56] Caching tarball of preloaded images
	I0721 17:07:26.810090    5424 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:07:26.810096    5424 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0721 17:07:26.810144    5424 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/config.json ...
	I0721 17:07:26.810542    5424 start.go:360] acquireMachinesLock for running-upgrade-647000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:07:26.810577    5424 start.go:364] duration metric: took 29.209µs to acquireMachinesLock for "running-upgrade-647000"
	I0721 17:07:26.810584    5424 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:07:26.810590    5424 fix.go:54] fixHost starting: 
	I0721 17:07:26.811147    5424 fix.go:112] recreateIfNeeded on running-upgrade-647000: state=Running err=<nil>
	W0721 17:07:26.811157    5424 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:07:26.813203    5424 out.go:177] * Updating the running qemu2 "running-upgrade-647000" VM ...
	I0721 17:07:26.821071    5424 machine.go:94] provisionDockerMachine start ...
	I0721 17:07:26.821104    5424 main.go:141] libmachine: Using SSH client type: native
	I0721 17:07:26.821207    5424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104586a10] 0x104589270 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0721 17:07:26.821212    5424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0721 17:07:26.879938    5424 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-647000
	
	I0721 17:07:26.879955    5424 buildroot.go:166] provisioning hostname "running-upgrade-647000"
	I0721 17:07:26.879998    5424 main.go:141] libmachine: Using SSH client type: native
	I0721 17:07:26.880131    5424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104586a10] 0x104589270 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0721 17:07:26.880137    5424 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-647000 && echo "running-upgrade-647000" | sudo tee /etc/hostname
	I0721 17:07:26.937132    5424 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-647000
	
	I0721 17:07:26.937177    5424 main.go:141] libmachine: Using SSH client type: native
	I0721 17:07:26.937279    5424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104586a10] 0x104589270 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0721 17:07:26.937287    5424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-647000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-647000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-647000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 17:07:26.991511    5424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 17:07:26.991521    5424 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1409/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1409/.minikube}
	I0721 17:07:26.991532    5424 buildroot.go:174] setting up certificates
	I0721 17:07:26.991538    5424 provision.go:84] configureAuth start
	I0721 17:07:26.991542    5424 provision.go:143] copyHostCerts
	I0721 17:07:26.991611    5424 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.pem, removing ...
	I0721 17:07:26.991618    5424 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.pem
	I0721 17:07:26.991735    5424 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.pem (1078 bytes)
	I0721 17:07:26.991913    5424 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1409/.minikube/cert.pem, removing ...
	I0721 17:07:26.991916    5424 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1409/.minikube/cert.pem
	I0721 17:07:26.991968    5424 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1409/.minikube/cert.pem (1123 bytes)
	I0721 17:07:26.992096    5424 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1409/.minikube/key.pem, removing ...
	I0721 17:07:26.992099    5424 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1409/.minikube/key.pem
	I0721 17:07:26.992146    5424 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1409/.minikube/key.pem (1675 bytes)
	I0721 17:07:26.992240    5424 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-647000 san=[127.0.0.1 localhost minikube running-upgrade-647000]
	I0721 17:07:27.071429    5424 provision.go:177] copyRemoteCerts
	I0721 17:07:27.071467    5424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 17:07:27.071475    5424 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	I0721 17:07:27.101205    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0721 17:07:27.108535    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0721 17:07:27.114771    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0721 17:07:27.121732    5424 provision.go:87] duration metric: took 130.187667ms to configureAuth
	I0721 17:07:27.121740    5424 buildroot.go:189] setting minikube options for container-runtime
	I0721 17:07:27.121858    5424 config.go:182] Loaded profile config "running-upgrade-647000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:07:27.121891    5424 main.go:141] libmachine: Using SSH client type: native
	I0721 17:07:27.121972    5424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104586a10] 0x104589270 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0721 17:07:27.121979    5424 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0721 17:07:27.177560    5424 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0721 17:07:27.177568    5424 buildroot.go:70] root file system type: tmpfs
	I0721 17:07:27.177615    5424 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0721 17:07:27.177674    5424 main.go:141] libmachine: Using SSH client type: native
	I0721 17:07:27.177782    5424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104586a10] 0x104589270 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0721 17:07:27.177815    5424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0721 17:07:27.237514    5424 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0721 17:07:27.237555    5424 main.go:141] libmachine: Using SSH client type: native
	I0721 17:07:27.237665    5424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104586a10] 0x104589270 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0721 17:07:27.237672    5424 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0721 17:07:27.295590    5424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 17:07:27.295602    5424 machine.go:97] duration metric: took 474.538667ms to provisionDockerMachine
	I0721 17:07:27.295607    5424 start.go:293] postStartSetup for "running-upgrade-647000" (driver="qemu2")
	I0721 17:07:27.295613    5424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 17:07:27.295662    5424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 17:07:27.295672    5424 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	I0721 17:07:27.326196    5424 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 17:07:27.327614    5424 info.go:137] Remote host: Buildroot 2021.02.12
	I0721 17:07:27.327621    5424 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1409/.minikube/addons for local assets ...
	I0721 17:07:27.327714    5424 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1409/.minikube/files for local assets ...
	I0721 17:07:27.327838    5424 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1409/.minikube/files/etc/ssl/certs/19112.pem -> 19112.pem in /etc/ssl/certs
	I0721 17:07:27.327965    5424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0721 17:07:27.330585    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/files/etc/ssl/certs/19112.pem --> /etc/ssl/certs/19112.pem (1708 bytes)
	I0721 17:07:27.338301    5424 start.go:296] duration metric: took 42.688166ms for postStartSetup
	I0721 17:07:27.338317    5424 fix.go:56] duration metric: took 527.741584ms for fixHost
	I0721 17:07:27.338357    5424 main.go:141] libmachine: Using SSH client type: native
	I0721 17:07:27.338483    5424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104586a10] 0x104589270 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0721 17:07:27.338492    5424 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0721 17:07:27.400045    5424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721606847.705866432
	
	I0721 17:07:27.400054    5424 fix.go:216] guest clock: 1721606847.705866432
	I0721 17:07:27.400058    5424 fix.go:229] Guest: 2024-07-21 17:07:27.705866432 -0700 PDT Remote: 2024-07-21 17:07:27.338318 -0700 PDT m=+0.629416417 (delta=367.548432ms)
	I0721 17:07:27.400071    5424 fix.go:200] guest clock delta is within tolerance: 367.548432ms
	I0721 17:07:27.400074    5424 start.go:83] releasing machines lock for "running-upgrade-647000", held for 589.509125ms
	I0721 17:07:27.400134    5424 ssh_runner.go:195] Run: cat /version.json
	I0721 17:07:27.400144    5424 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	I0721 17:07:27.400134    5424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0721 17:07:27.400209    5424 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	W0721 17:07:27.400695    5424 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50253: connect: connection refused
	I0721 17:07:27.400722    5424 retry.go:31] will retry after 361.733225ms: dial tcp [::1]:50253: connect: connection refused
	W0721 17:07:27.427954    5424 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0721 17:07:27.427999    5424 ssh_runner.go:195] Run: systemctl --version
	I0721 17:07:27.429854    5424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 17:07:27.431713    5424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 17:07:27.431734    5424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0721 17:07:27.434740    5424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0721 17:07:27.439447    5424 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0721 17:07:27.439453    5424 start.go:495] detecting cgroup driver to use...
	I0721 17:07:27.439526    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 17:07:27.444489    5424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0721 17:07:27.447340    5424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0721 17:07:27.450378    5424 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0721 17:07:27.450400    5424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0721 17:07:27.455392    5424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 17:07:27.458388    5424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0721 17:07:27.461406    5424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 17:07:27.464146    5424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 17:07:27.467141    5424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0721 17:07:27.470026    5424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0721 17:07:27.473210    5424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0721 17:07:27.476048    5424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 17:07:27.478597    5424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 17:07:27.481616    5424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:07:27.554272    5424 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0721 17:07:27.561064    5424 start.go:495] detecting cgroup driver to use...
	I0721 17:07:27.561128    5424 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0721 17:07:27.570128    5424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 17:07:27.574895    5424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 17:07:27.581175    5424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 17:07:27.585606    5424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0721 17:07:27.589885    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 17:07:27.595399    5424 ssh_runner.go:195] Run: which cri-dockerd
	I0721 17:07:27.596635    5424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0721 17:07:27.598993    5424 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0721 17:07:27.603664    5424 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0721 17:07:27.679457    5424 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0721 17:07:27.753234    5424 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0721 17:07:27.753293    5424 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0721 17:07:27.758295    5424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:07:27.832164    5424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0721 17:07:29.612240    5424 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.212100959s)
	I0721 17:07:29.612860    5424 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.780733834s)
	I0721 17:07:29.612906    5424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0721 17:07:29.617490    5424 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0721 17:07:29.623417    5424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0721 17:07:29.628300    5424 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0721 17:07:29.698250    5424 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0721 17:07:29.761488    5424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:07:29.837235    5424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0721 17:07:29.843583    5424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0721 17:07:29.847973    5424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:07:29.911042    5424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0721 17:07:29.949404    5424 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0721 17:07:29.949469    5424 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0721 17:07:29.951481    5424 start.go:563] Will wait 60s for crictl version
	I0721 17:07:29.951518    5424 ssh_runner.go:195] Run: which crictl
	I0721 17:07:29.953114    5424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 17:07:29.964972    5424 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0721 17:07:29.965034    5424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0721 17:07:29.976789    5424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0721 17:07:29.997755    5424 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0721 17:07:29.997819    5424 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0721 17:07:29.999261    5424 kubeadm.go:883] updating cluster {Name:running-upgrade-647000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50285 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0721 17:07:29.999308    5424 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0721 17:07:29.999349    5424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0721 17:07:30.015736    5424 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0721 17:07:30.015745    5424 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0721 17:07:30.015813    5424 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0721 17:07:30.018811    5424 ssh_runner.go:195] Run: which lz4
	I0721 17:07:30.020135    5424 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0721 17:07:30.021345    5424 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0721 17:07:30.021355    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0721 17:07:30.901981    5424 docker.go:649] duration metric: took 881.895167ms to copy over tarball
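The preload transfer above follows a stat-then-copy pattern: the existence check exits with status 1, so the ~343 MB tarball is copied across. A rough, hedged equivalent that shells out to the ssh and scp binaries — the host address is hypothetical, and minikube itself uses its internal ssh_runner rather than these commands:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	host := "docker@192.168.0.10" // hypothetical guest address; assumes keys are already set up
	remote := "/preloaded.tar.lz4"
	local := "preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4"

	// Existence check: a non-zero exit means the file is missing and must be copied.
	if err := exec.Command("ssh", host, "stat -c '%s %y' "+remote).Run(); err == nil {
		fmt.Println("preload already present, skipping copy")
		return
	}
	if out, err := exec.Command("scp", local, host+":"+remote).CombinedOutput(); err != nil {
		fmt.Printf("copy failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preload copied")
}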
	I0721 17:07:30.902041    5424 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0721 17:07:32.253745    5424 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.351726083s)
	I0721 17:07:32.253760    5424 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0721 17:07:32.269952    5424 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0721 17:07:32.273304    5424 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0721 17:07:32.278265    5424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:07:32.342937    5424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0721 17:07:33.700815    5424 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.357898834s)
	I0721 17:07:33.700899    5424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0721 17:07:33.725375    5424 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0721 17:07:33.725383    5424 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0721 17:07:33.725388    5424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0721 17:07:33.731092    5424 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:07:33.733194    5424 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:07:33.735502    5424 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:07:33.735544    5424 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:07:33.737761    5424 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:07:33.737853    5424 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:07:33.739309    5424 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:07:33.739364    5424 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:07:33.741154    5424 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:07:33.741186    5424 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0721 17:07:33.743036    5424 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:07:33.743063    5424 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0721 17:07:33.744515    5424 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:07:33.744632    5424 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0721 17:07:33.746835    5424 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0721 17:07:33.747888    5424 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:07:36.092675    5424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0721 17:07:36.098972    5424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:07:36.149362    5424 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0721 17:07:36.149371    5424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0721 17:07:36.149408    5424 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0721 17:07:36.149408    5424 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:07:36.149504    5424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0721 17:07:36.149512    5424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:07:36.167973    5424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0721 17:07:36.170597    5424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0721 17:07:36.170601    5424 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0721 17:07:36.170728    5424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:07:36.184757    5424 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0721 17:07:36.184777    5424 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:07:36.184826    5424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:07:36.195425    5424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0721 17:07:36.195541    5424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0721 17:07:36.197092    5424 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0721 17:07:36.197104    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0721 17:07:36.239052    5424 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0721 17:07:36.239066    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0721 17:07:36.277389    5424 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
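Each cache miss above is repaired by streaming the image tarball into docker load (docker.go:304). A standalone sketch of that pipe, run directly on the node rather than through the test's ssh_runner; the path is the coredns tarball from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	f, err := os.Open("/var/lib/minikube/images/coredns_v1.8.6")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // equivalent of `cat tarball | docker load`
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("docker load failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Printf("%s", out)
}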
	I0721 17:07:36.343705    5424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:07:36.356092    5424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0721 17:07:36.356114    5424 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:07:36.356172    5424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:07:36.367537    5424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0721 17:07:36.630435    5424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:07:36.647996    5424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:07:36.677124    5424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0721 17:07:36.677165    5424 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:07:36.677278    5424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:07:36.684077    5424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0721 17:07:36.685912    5424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0721 17:07:36.685948    5424 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:07:36.685995    5424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:07:36.710604    5424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0721 17:07:36.711715    5424 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0721 17:07:36.711736    5424 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0721 17:07:36.711796    5424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0721 17:07:36.721708    5424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0721 17:07:36.725251    5424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0721 17:07:36.725358    5424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0721 17:07:36.726810    5424 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0721 17:07:36.726822    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0721 17:07:36.734360    5424 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0721 17:07:36.734370    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0721 17:07:36.760133    5424 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0721 17:07:36.798067    5424 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0721 17:07:36.798167    5424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:07:36.810567    5424 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0721 17:07:36.810592    5424 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:07:36.810651    5424 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:07:37.365491    5424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0721 17:07:37.365817    5424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0721 17:07:37.370406    5424 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0721 17:07:37.370437    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0721 17:07:37.431841    5424 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0721 17:07:37.431857    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0721 17:07:37.664862    5424 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0721 17:07:37.664899    5424 cache_images.go:92] duration metric: took 3.939613041s to LoadCachedImages
	W0721 17:07:37.664945    5424 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0721 17:07:37.664952    5424 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0721 17:07:37.664999    5424 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-647000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0721 17:07:37.665058    5424 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0721 17:07:37.678723    5424 cni.go:84] Creating CNI manager for ""
	I0721 17:07:37.678733    5424 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:07:37.678738    5424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0721 17:07:37.678746    5424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-647000 NodeName:running-upgrade-647000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0721 17:07:37.678808    5424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-647000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0721 17:07:37.678873    5424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0721 17:07:37.681976    5424 binaries.go:44] Found k8s binaries, skipping transfer
	I0721 17:07:37.682007    5424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0721 17:07:37.685178    5424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0721 17:07:37.690354    5424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 17:07:37.695509    5424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0721 17:07:37.700711    5424 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0721 17:07:37.702226    5424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:07:37.765185    5424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 17:07:37.770113    5424 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000 for IP: 10.0.2.15
	I0721 17:07:37.770119    5424 certs.go:194] generating shared ca certs ...
	I0721 17:07:37.770127    5424 certs.go:226] acquiring lock for ca certs: {Name:mke4827a2590eed55d39c612acfba4d65d3007ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:07:37.770283    5424 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.key
	I0721 17:07:37.770332    5424 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/proxy-client-ca.key
	I0721 17:07:37.770337    5424 certs.go:256] generating profile certs ...
	I0721 17:07:37.770414    5424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/client.key
	I0721 17:07:37.770430    5424 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/apiserver.key.9d1fd0b5
	I0721 17:07:37.770441    5424 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/apiserver.crt.9d1fd0b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0721 17:07:37.858499    5424 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/apiserver.crt.9d1fd0b5 ...
	I0721 17:07:37.858506    5424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/apiserver.crt.9d1fd0b5: {Name:mke5facea4908701ac3fb83236f2ecfac3386c56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:07:37.858755    5424 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/apiserver.key.9d1fd0b5 ...
	I0721 17:07:37.858759    5424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/apiserver.key.9d1fd0b5: {Name:mk22f2a899f27af74a1c59df0b4944907068c3d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:07:37.858881    5424 certs.go:381] copying /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/apiserver.crt.9d1fd0b5 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/apiserver.crt
	I0721 17:07:37.859060    5424 certs.go:385] copying /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/apiserver.key.9d1fd0b5 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/apiserver.key
	I0721 17:07:37.860240    5424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/proxy-client.key
	I0721 17:07:37.860380    5424 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/1911.pem (1338 bytes)
	W0721 17:07:37.860409    5424 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/1911_empty.pem, impossibly tiny 0 bytes
	I0721 17:07:37.860419    5424 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca-key.pem (1679 bytes)
	I0721 17:07:37.860439    5424 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem (1078 bytes)
	I0721 17:07:37.860462    5424 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem (1123 bytes)
	I0721 17:07:37.860479    5424 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/key.pem (1675 bytes)
	I0721 17:07:37.860518    5424 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/files/etc/ssl/certs/19112.pem (1708 bytes)
	I0721 17:07:37.860821    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 17:07:37.867985    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 17:07:37.875462    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 17:07:37.883081    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0721 17:07:37.890581    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0721 17:07:37.897433    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0721 17:07:37.904007    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 17:07:37.911543    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 17:07:37.919310    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/files/etc/ssl/certs/19112.pem --> /usr/share/ca-certificates/19112.pem (1708 bytes)
	I0721 17:07:37.926663    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 17:07:37.933509    5424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/1911.pem --> /usr/share/ca-certificates/1911.pem (1338 bytes)
	I0721 17:07:37.940265    5424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0721 17:07:37.945386    5424 ssh_runner.go:195] Run: openssl version
	I0721 17:07:37.947245    5424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1911.pem && ln -fs /usr/share/ca-certificates/1911.pem /etc/ssl/certs/1911.pem"
	I0721 17:07:37.950422    5424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1911.pem
	I0721 17:07:37.951897    5424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:32 /usr/share/ca-certificates/1911.pem
	I0721 17:07:37.951918    5424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1911.pem
	I0721 17:07:37.953931    5424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1911.pem /etc/ssl/certs/51391683.0"
	I0721 17:07:37.956572    5424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19112.pem && ln -fs /usr/share/ca-certificates/19112.pem /etc/ssl/certs/19112.pem"
	I0721 17:07:37.960081    5424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19112.pem
	I0721 17:07:37.961720    5424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:32 /usr/share/ca-certificates/19112.pem
	I0721 17:07:37.961742    5424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19112.pem
	I0721 17:07:37.963526    5424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19112.pem /etc/ssl/certs/3ec20f2e.0"
	I0721 17:07:37.966234    5424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 17:07:37.969067    5424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 17:07:37.970774    5424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:24 /usr/share/ca-certificates/minikubeCA.pem
	I0721 17:07:37.970792    5424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 17:07:37.972534    5424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0721 17:07:37.975610    5424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 17:07:37.977132    5424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0721 17:07:37.979032    5424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0721 17:07:37.980749    5424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0721 17:07:37.982720    5424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0721 17:07:37.984653    5424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0721 17:07:37.986446    5424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
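The series of openssl x509 -checkend 86400 runs above asks one question per cert: does it expire within the next 24 hours? The same check expressed in Go with crypto/x509 instead of shelling out — the file path is one of the certs named in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	// Mirrors `-checkend 86400`: fail if the cert expires within the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}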
	I0721 17:07:37.988309    5424 kubeadm.go:392] StartCluster: {Name:running-upgrade-647000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50285 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0721 17:07:37.988370    5424 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0721 17:07:37.998438    5424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0721 17:07:38.001590    5424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0721 17:07:38.001595    5424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0721 17:07:38.001615    5424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0721 17:07:38.004686    5424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0721 17:07:38.004940    5424 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-647000" does not appear in /Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:07:38.004995    5424 kubeconfig.go:62] /Users/jenkins/minikube-integration/19312-1409/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-647000" cluster setting kubeconfig missing "running-upgrade-647000" context setting]
	I0721 17:07:38.005130    5424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/kubeconfig: {Name:mk941eb06ccb0e2f7fcbae3a7de63e740b813743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:07:38.005787    5424 kapi.go:59] client config for running-upgrade-647000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10591b790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0721 17:07:38.006116    5424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0721 17:07:38.008999    5424 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-647000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
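The drift check above hinges on the exit status of diff -u old new: 0 means the rendered kubeadm.yaml is unchanged, 1 means it drifted and the cluster is reconfigured from the .new file. A small sketch of that decision (paths taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("no kubeadm config drift")
		return
	}
	// diff exits 1 when the files differ, >1 on real errors.
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
		return
	}
	fmt.Printf("diff failed: %v\n", err)
}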
	I0721 17:07:38.009009    5424 kubeadm.go:1160] stopping kube-system containers ...
	I0721 17:07:38.009045    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0721 17:07:38.023638    5424 docker.go:483] Stopping containers: [7661d1ef609c a99b5c20f8f4 1444895ef33e 17d3c5086bf8 04cfba4b0b9b 5849c09391d1 eca19629fad3 89bcdb17bb36 e243b7ecf176 de94b8fa24b7 6bf8776553e3 842bb19739cb]
	I0721 17:07:38.023703    5424 ssh_runner.go:195] Run: docker stop 7661d1ef609c a99b5c20f8f4 1444895ef33e 17d3c5086bf8 04cfba4b0b9b 5849c09391d1 eca19629fad3 89bcdb17bb36 e243b7ecf176 de94b8fa24b7 6bf8776553e3 842bb19739cb
	I0721 17:07:38.034453    5424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0721 17:07:38.141967    5424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0721 17:07:38.146669    5424 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Jul 22 00:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jul 22 00:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 22 00:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 22 00:07 /etc/kubernetes/scheduler.conf
	
	I0721 17:07:38.146706    5424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf
	I0721 17:07:38.150443    5424 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0721 17:07:38.150477    5424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0721 17:07:38.154356    5424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf
	I0721 17:07:38.157920    5424 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0721 17:07:38.157946    5424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0721 17:07:38.161518    5424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf
	I0721 17:07:38.164618    5424 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0721 17:07:38.164648    5424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0721 17:07:38.167389    5424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf
	I0721 17:07:38.170104    5424 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0721 17:07:38.170126    5424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0721 17:07:38.173008    5424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0721 17:07:38.175885    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:07:38.197424    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:07:38.567776    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:07:38.759537    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:07:38.787223    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:07:38.810832    5424 api_server.go:52] waiting for apiserver process to appear ...
	I0721 17:07:38.810916    5424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:07:39.312949    5424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:07:39.812959    5424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:07:40.312961    5424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:07:40.317388    5424 api_server.go:72] duration metric: took 1.506600291s to wait for apiserver process to appear ...
	I0721 17:07:40.317398    5424 api_server.go:88] waiting for apiserver healthz status ...
	I0721 17:07:40.317408    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:07:45.319529    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:07:45.319619    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:07:50.320599    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:07:50.320644    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:07:55.321311    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:07:55.321341    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:08:00.322027    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:08:00.322100    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:08:05.323395    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:08:05.323518    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:08:10.325261    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:08:10.325366    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:08:15.327536    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:08:15.327563    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:08:20.329746    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:08:20.329825    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:08:25.332352    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:08:25.332421    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:08:30.334915    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:08:30.334994    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:08:35.336878    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:08:35.336924    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:08:40.339199    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
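The block above is the apiserver readiness wait: an HTTPS GET of /healthz with a short per-request timeout, retried until it answers or the overall budget runs out (here every attempt times out). A trimmed-down sketch of that loop; the 4-minute overall budget is a placeholder, and InsecureSkipVerify stands in for the client-cert configuration the real check uses:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the per-request Client.Timeout seen in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}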
	I0721 17:08:40.339632    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:08:40.381000    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:08:40.381150    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:08:40.402137    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:08:40.402233    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:08:40.426143    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:08:40.426226    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:08:40.437042    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:08:40.437128    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:08:40.447205    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:08:40.447278    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:08:40.458122    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:08:40.458188    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:08:40.468541    5424 logs.go:276] 0 containers: []
	W0721 17:08:40.468553    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:08:40.468611    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:08:40.478835    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:08:40.478850    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:08:40.478856    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:08:40.492065    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:08:40.492078    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:08:40.512143    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:08:40.512156    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:08:40.526521    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:08:40.526532    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:08:40.541596    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:08:40.541608    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:08:40.566504    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:08:40.566514    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:08:40.644013    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:08:40.644025    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:08:40.656323    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:08:40.656333    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:08:40.673577    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:08:40.673587    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:08:40.690853    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:08:40.690863    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:08:40.695793    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:08:40.695803    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:08:40.710573    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:08:40.710583    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:08:40.722117    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:08:40.722129    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:08:40.738432    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:08:40.738443    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:08:40.775094    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:08:40.775190    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:08:40.776218    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:08:40.776222    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:08:40.788008    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:08:40.788020    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:08:40.805167    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:08:40.805179    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:08:40.816443    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:08:40.816452    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:08:40.816479    5424 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0721 17:08:40.816483    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:08:40.816487    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:08:40.816491    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:08:40.816494    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:08:50.817850    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:08:55.818669    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:08:55.819143    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:08:55.859666    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:08:55.859797    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:08:55.881254    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:08:55.881372    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:08:55.895802    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:08:55.895883    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:08:55.908207    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:08:55.908280    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:08:55.919429    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:08:55.919518    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:08:55.930159    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:08:55.930231    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:08:55.940500    5424 logs.go:276] 0 containers: []
	W0721 17:08:55.940514    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:08:55.940568    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:08:55.951368    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:08:55.951386    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:08:55.951391    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:08:55.956375    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:08:55.956382    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:08:55.971858    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:08:55.971870    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:08:56.011259    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:08:56.011352    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:08:56.012348    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:08:56.012354    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:08:56.024473    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:08:56.024485    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:08:56.049132    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:08:56.049138    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:08:56.061732    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:08:56.061745    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:08:56.088817    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:08:56.088827    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:08:56.106940    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:08:56.106952    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:08:56.118293    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:08:56.118306    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:08:56.135976    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:08:56.135990    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:08:56.147320    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:08:56.147335    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:08:56.184054    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:08:56.184066    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:08:56.199294    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:08:56.199303    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:08:56.215317    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:08:56.215328    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:08:56.226554    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:08:56.226576    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:08:56.238966    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:08:56.238977    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:08:56.253538    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:08:56.253549    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:08:56.253573    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:08:56.253578    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:08:56.253581    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:08:56.253585    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:08:56.253587    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:09:06.257572    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:09:11.260294    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:09:11.260510    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:09:11.274926    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:09:11.275010    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:09:11.286342    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:09:11.286412    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:09:11.296737    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:09:11.296799    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:09:11.307170    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:09:11.307247    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:09:11.317420    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:09:11.317484    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:09:11.327577    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:09:11.327643    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:09:11.337883    5424 logs.go:276] 0 containers: []
	W0721 17:09:11.337897    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:09:11.337963    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:09:11.352455    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:09:11.352474    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:09:11.352479    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:09:11.392015    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:09:11.392027    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:09:11.403561    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:09:11.403577    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:09:11.415120    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:09:11.415129    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:09:11.430942    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:09:11.430955    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:09:11.445954    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:09:11.445964    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:09:11.457566    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:09:11.457576    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:09:11.461853    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:09:11.461862    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:09:11.474912    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:09:11.474922    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:09:11.489956    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:09:11.489968    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:09:11.501787    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:09:11.501799    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:09:11.526328    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:09:11.526338    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:09:11.563094    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:09:11.563189    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:09:11.564216    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:09:11.564222    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:09:11.577849    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:09:11.577860    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:09:11.596777    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:09:11.596788    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:09:11.608153    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:09:11.608163    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:09:11.625005    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:09:11.625014    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:09:11.637688    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:09:11.637698    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:09:11.637726    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:09:11.637730    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:09:11.637735    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:09:11.637738    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:09:11.637741    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:09:21.639765    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:09:26.642546    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:09:26.643007    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:09:26.680958    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:09:26.681097    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:09:26.703526    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:09:26.703645    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:09:26.719261    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:09:26.719338    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:09:26.735679    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:09:26.735753    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:09:26.746681    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:09:26.746742    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:09:26.757278    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:09:26.757344    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:09:26.767325    5424 logs.go:276] 0 containers: []
	W0721 17:09:26.767338    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:09:26.767392    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:09:26.779958    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:09:26.779976    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:09:26.779982    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:09:26.794113    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:09:26.794124    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:09:26.813617    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:09:26.813630    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:09:26.828607    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:09:26.828617    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:09:26.840527    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:09:26.840541    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:09:26.852362    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:09:26.852372    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:09:26.856882    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:09:26.856890    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:09:26.868343    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:09:26.868356    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:09:26.886421    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:09:26.886433    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:09:26.901698    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:09:26.901711    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:09:26.914537    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:09:26.914551    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:09:26.940321    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:09:26.940333    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:09:26.979179    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:09:26.979272    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:09:26.980264    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:09:26.980268    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:09:27.016909    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:09:27.016924    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:09:27.030523    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:09:27.030533    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:09:27.041761    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:09:27.041772    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:09:27.057147    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:09:27.057158    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:09:27.068881    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:09:27.068890    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:09:27.068918    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:09:27.068925    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:09:27.068930    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:09:27.068934    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:09:27.068939    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:09:37.072966    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:09:42.075733    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:09:42.076032    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:09:42.114903    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:09:42.115027    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:09:42.136320    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:09:42.136404    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:09:42.151998    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:09:42.152065    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:09:42.164890    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:09:42.164959    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:09:42.175830    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:09:42.175894    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:09:42.187266    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:09:42.187348    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:09:42.201791    5424 logs.go:276] 0 containers: []
	W0721 17:09:42.201802    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:09:42.201854    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:09:42.212467    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:09:42.212485    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:09:42.212490    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:09:42.223742    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:09:42.223754    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:09:42.238958    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:09:42.238966    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:09:42.250055    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:09:42.250065    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:09:42.254367    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:09:42.254377    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:09:42.267499    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:09:42.267509    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:09:42.281720    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:09:42.281729    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:09:42.293310    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:09:42.293323    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:09:42.306909    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:09:42.306920    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:09:42.347453    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:09:42.347547    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:09:42.348608    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:09:42.348613    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:09:42.384130    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:09:42.384144    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:09:42.398465    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:09:42.398475    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:09:42.413660    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:09:42.413669    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:09:42.438334    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:09:42.438341    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:09:42.456856    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:09:42.456867    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:09:42.475663    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:09:42.475673    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:09:42.494544    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:09:42.494556    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:09:42.509654    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:09:42.509664    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:09:42.509694    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:09:42.509698    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:09:42.509702    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:09:42.509705    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:09:42.509708    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:09:52.512483    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:09:57.514748    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:09:57.514931    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:09:57.527538    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:09:57.527616    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:09:57.538638    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:09:57.538716    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:09:57.554395    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:09:57.554462    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:09:57.567794    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:09:57.567857    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:09:57.582811    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:09:57.582881    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:09:57.593889    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:09:57.593954    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:09:57.604698    5424 logs.go:276] 0 containers: []
	W0721 17:09:57.604709    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:09:57.604767    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:09:57.624084    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:09:57.624100    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:09:57.624106    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:09:57.638316    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:09:57.638326    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:09:57.652779    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:09:57.652790    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:09:57.664656    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:09:57.664666    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:09:57.677356    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:09:57.677372    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:09:57.701103    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:09:57.701111    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:09:57.716010    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:09:57.716021    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:09:57.728079    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:09:57.728090    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:09:57.740093    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:09:57.740103    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:09:57.755739    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:09:57.755753    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:09:57.773401    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:09:57.773411    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:09:57.785199    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:09:57.785213    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:09:57.824453    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:09:57.824546    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:09:57.825598    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:09:57.825602    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:09:57.829832    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:09:57.829838    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:09:57.864794    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:09:57.864805    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:09:57.884149    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:09:57.884158    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:09:57.898029    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:09:57.898041    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:09:57.909865    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:09:57.909877    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:09:57.909906    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:09:57.909910    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:09:57.909915    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:09:57.909921    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:09:57.909923    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:10:07.913829    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:10:12.915674    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:10:12.915799    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:10:12.931186    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:10:12.931257    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:10:12.945583    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:10:12.945665    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:10:12.959113    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:10:12.959181    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:10:12.970966    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:10:12.971040    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:10:12.983030    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:10:12.983107    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:10:12.995487    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:10:12.995559    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:10:13.009468    5424 logs.go:276] 0 containers: []
	W0721 17:10:13.009480    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:10:13.009541    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:10:13.021818    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:10:13.021841    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:10:13.021847    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:10:13.043582    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:10:13.043597    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:10:13.061423    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:10:13.061443    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:10:13.076372    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:10:13.076385    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:10:13.089400    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:10:13.089412    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:10:13.094302    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:10:13.094314    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:10:13.135165    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:10:13.135179    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:10:13.153319    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:10:13.153331    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:10:13.194825    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:10:13.194928    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:10:13.195993    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:10:13.196000    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:10:13.213481    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:10:13.213500    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:10:13.230838    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:10:13.230856    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:10:13.246237    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:10:13.246252    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:10:13.276385    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:10:13.276403    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:10:13.289525    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:10:13.289538    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:10:13.305582    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:10:13.305597    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:10:13.322660    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:10:13.322674    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:10:13.340952    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:10:13.340964    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:10:13.363935    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:10:13.363952    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:10:13.363987    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:10:13.363991    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:10:13.363996    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:10:13.364006    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:10:13.364008    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:10:23.366848    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:10:28.369292    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:10:28.369536    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:10:28.391037    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:10:28.391135    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:10:28.405516    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:10:28.405591    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:10:28.418654    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:10:28.418727    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:10:28.429210    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:10:28.429278    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:10:28.439636    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:10:28.439702    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:10:28.450192    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:10:28.450257    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:10:28.460389    5424 logs.go:276] 0 containers: []
	W0721 17:10:28.460403    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:10:28.460464    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:10:28.471702    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:10:28.471719    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:10:28.471724    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:10:28.491696    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:10:28.491707    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:10:28.503494    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:10:28.503510    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:10:28.517215    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:10:28.517225    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:10:28.528572    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:10:28.528582    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:10:28.539891    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:10:28.539904    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:10:28.552175    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:10:28.552185    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:10:28.556935    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:10:28.556942    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:10:28.593155    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:10:28.593169    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:10:28.617590    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:10:28.617598    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:10:28.631774    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:10:28.631791    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:10:28.647252    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:10:28.647266    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:10:28.661199    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:10:28.661212    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:10:28.677133    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:10:28.677145    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:10:28.694605    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:10:28.694616    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:10:28.705659    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:10:28.705669    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:10:28.746740    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:10:28.746838    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:10:28.747884    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:10:28.747890    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:10:28.762132    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:10:28.762145    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:10:28.762171    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:10:28.762175    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:10:28.762179    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:10:28.762183    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:10:28.762186    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:10:38.766108    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:10:43.768462    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
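The two lines above show the pattern that repeats for the rest of this attempt: the apiserver's /healthz endpoint is polled over HTTPS and each probe gives up once the client timeout expires. Below is a minimal sketch of that kind of probe; the helper name, timeout, and TLS handling are illustrative assumptions, not minikube's actual implementation.

```go
// Illustrative sketch (not minikube's code): polling an apiserver /healthz
// endpoint until it answers or the per-request timeout expires, mirroring the
// "Checking apiserver healthz ... context deadline exceeded" lines above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // roughly the gap between the two log lines
		Transport: &http.Transport{
			// the apiserver certificate is self-signed in this setup
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	for i := 0; i < 5; i++ {
		if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println(err)
			time.Sleep(10 * time.Second)
			continue
		}
		fmt.Println("apiserver is healthy")
		return
	}
}
```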
	I0721 17:10:43.768795    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:10:43.822053    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:10:43.822173    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:10:43.858400    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:10:43.858473    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:10:43.875006    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:10:43.875079    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:10:43.886032    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:10:43.886107    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:10:43.899262    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:10:43.899344    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:10:43.910393    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:10:43.910456    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:10:43.920889    5424 logs.go:276] 0 containers: []
	W0721 17:10:43.920900    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:10:43.920954    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:10:43.932175    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
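Each component's containers are located by filtering `docker ps -a` on the `k8s_<component>` name prefix before their logs are pulled with `docker logs --tail 400`. The sketch below shows that discovery step driven from Go; `containerIDs` is a hypothetical helper, not a minikube function.

```go
// Illustrative sketch (assumed helper): discovering the container IDs for one
// control-plane component the same way the log does, by filtering
// "docker ps -a" on the k8s_<name> prefix.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
```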
	I0721 17:10:43.932194    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:10:43.932199    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:10:43.946978    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:10:43.946993    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:10:43.958714    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:10:43.958724    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:10:43.974212    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:10:43.974222    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:10:43.985835    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:10:43.985847    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:10:44.020058    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:10:44.020071    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:10:44.035098    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:10:44.035109    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:10:44.047038    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:10:44.047049    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:10:44.071642    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:10:44.071651    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:10:44.090701    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:10:44.090714    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:10:44.102654    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:10:44.102664    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:10:44.117630    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:10:44.117643    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:10:44.129310    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:10:44.129321    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:10:44.140547    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:10:44.140557    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:10:44.180115    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:10:44.180208    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:10:44.181263    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:10:44.181267    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:10:44.185286    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:10:44.185295    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:10:44.199192    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:10:44.199205    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:10:44.215946    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:10:44.215958    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:10:44.215982    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:10:44.215987    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:10:44.215991    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:10:44.215994    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:10:44.215997    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:10:54.218046    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:10:59.220166    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:10:59.220315    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:10:59.233478    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:10:59.233558    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:10:59.251718    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:10:59.251794    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:10:59.262489    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:10:59.262557    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:10:59.273145    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:10:59.273221    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:10:59.283476    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:10:59.283544    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:10:59.294604    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:10:59.294671    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:10:59.305813    5424 logs.go:276] 0 containers: []
	W0721 17:10:59.305824    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:10:59.305883    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:10:59.317269    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:10:59.317285    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:10:59.317291    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:10:59.328824    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:10:59.328836    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:10:59.341421    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:10:59.341436    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:10:59.358778    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:10:59.358789    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:10:59.384281    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:10:59.384297    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:10:59.426583    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:10:59.426683    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:10:59.427754    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:10:59.427762    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:10:59.432663    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:10:59.432672    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:10:59.449599    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:10:59.449614    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:10:59.465312    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:10:59.465326    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:10:59.481434    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:10:59.481447    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:10:59.497317    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:10:59.497328    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:10:59.517094    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:10:59.517106    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:10:59.530981    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:10:59.530991    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:10:59.546001    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:10:59.546015    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:10:59.559064    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:10:59.559075    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:10:59.596323    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:10:59.596335    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:10:59.613243    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:10:59.613254    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:10:59.625193    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:10:59.625204    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:10:59.625229    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:10:59.625234    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:10:59.625299    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:10:59.625304    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:10:59.625307    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:11:09.629218    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:14.631146    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:14.631213    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:11:14.643639    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:11:14.643686    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:11:14.655529    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:11:14.655578    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:11:14.666794    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:11:14.666862    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:11:14.677345    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:11:14.677411    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:11:14.688074    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:11:14.688131    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:11:14.698776    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:11:14.698836    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:11:14.709778    5424 logs.go:276] 0 containers: []
	W0721 17:11:14.709790    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:11:14.709840    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:11:14.722504    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:11:14.722520    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:11:14.722526    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:11:14.733599    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:11:14.733610    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:11:14.745097    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:11:14.745109    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:11:14.762727    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:11:14.762740    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:11:14.776567    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:11:14.776576    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:11:14.791183    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:11:14.791195    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:11:14.804568    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:11:14.804578    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:11:14.828046    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:11:14.828057    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:11:14.839337    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:11:14.839350    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:11:14.853046    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:11:14.853058    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:11:14.868235    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:11:14.868248    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:11:14.903072    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:11:14.903083    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:11:14.928828    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:11:14.928839    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:11:14.943934    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:11:14.943945    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:11:14.955824    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:11:14.955836    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:11:14.968787    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:11:14.968801    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:11:15.007432    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:11:15.007527    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:11:15.008523    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:11:15.008528    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:11:15.013010    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:11:15.013019    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:11:15.013044    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:11:15.013049    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:11:15.013053    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:11:15.013057    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:11:15.013060    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:11:25.016967    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:30.019193    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:30.019337    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:11:30.038539    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:11:30.038637    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:11:30.054548    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:11:30.054614    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:11:30.066712    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:11:30.066785    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:11:30.077295    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:11:30.077362    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:11:30.087369    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:11:30.087432    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:11:30.098145    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:11:30.098218    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:11:30.108192    5424 logs.go:276] 0 containers: []
	W0721 17:11:30.108201    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:11:30.108252    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:11:30.118500    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:11:30.118516    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:11:30.118522    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:11:30.156822    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:11:30.156914    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:11:30.157948    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:11:30.157953    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:11:30.195856    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:11:30.195870    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:11:30.207795    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:11:30.207806    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:11:30.232647    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:11:30.232663    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:11:30.237045    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:11:30.237052    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:11:30.258550    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:11:30.258564    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:11:30.273185    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:11:30.273195    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:11:30.291375    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:11:30.291390    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:11:30.305612    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:11:30.305626    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:11:30.321098    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:11:30.321113    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:11:30.335938    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:11:30.335948    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:11:30.353135    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:11:30.353153    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:11:30.366969    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:11:30.366985    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:11:30.381585    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:11:30.381599    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:11:30.394038    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:11:30.394049    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:11:30.406257    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:11:30.406269    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:11:30.417336    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:11:30.417346    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:11:30.417375    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:11:30.417380    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:11:30.417384    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:11:30.417389    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:11:30.417401    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:11:40.421230    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:45.423371    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:45.423455    5424 kubeadm.go:597] duration metric: took 4m7.428706208s to restartPrimaryControlPlane
	W0721 17:11:45.423530    5424 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0721 17:11:45.423559    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0721 17:11:46.402807    5424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 17:11:46.408012    5424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0721 17:11:46.410819    5424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0721 17:11:46.413760    5424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0721 17:11:46.413767    5424 kubeadm.go:157] found existing configuration files:
	
	I0721 17:11:46.413793    5424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf
	I0721 17:11:46.416518    5424 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0721 17:11:46.416543    5424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0721 17:11:46.419182    5424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf
	I0721 17:11:46.422180    5424 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0721 17:11:46.422201    5424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0721 17:11:46.425095    5424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf
	I0721 17:11:46.427472    5424 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0721 17:11:46.427493    5424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0721 17:11:46.430498    5424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf
	I0721 17:11:46.433367    5424 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0721 17:11:46.433391    5424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
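The block above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not mention it (or does not exist) is removed so that `kubeadm init` can regenerate it. A rough Go equivalent, assuming the same paths and endpoint as in the log, might look like this:

```go
// Illustrative sketch of the stale-config cleanup shown above: grep each
// kubeconfig for the expected control-plane endpoint and remove files that do
// not mention it. Paths and endpoint are taken from the log; the program
// itself is an assumption, not minikube's code.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50285"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern (or the file) is missing
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
				fmt.Fprintln(os.Stderr, "rm failed:", err)
			}
		}
	}
}
```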
	I0721 17:11:46.435968    5424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0721 17:11:46.451808    5424 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0721 17:11:46.451881    5424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0721 17:11:46.507364    5424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0721 17:11:46.507430    5424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0721 17:11:46.507488    5424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0721 17:11:46.555361    5424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0721 17:11:46.560552    5424 out.go:204]   - Generating certificates and keys ...
	I0721 17:11:46.560585    5424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0721 17:11:46.560618    5424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0721 17:11:46.560663    5424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0721 17:11:46.560695    5424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0721 17:11:46.560734    5424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0721 17:11:46.560762    5424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0721 17:11:46.560800    5424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0721 17:11:46.560838    5424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0721 17:11:46.560876    5424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0721 17:11:46.560923    5424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0721 17:11:46.560948    5424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0721 17:11:46.560978    5424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0721 17:11:46.661264    5424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0721 17:11:46.756377    5424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0721 17:11:46.993763    5424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0721 17:11:47.077298    5424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0721 17:11:47.104831    5424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0721 17:11:47.105312    5424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0721 17:11:47.105439    5424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0721 17:11:47.173727    5424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0721 17:11:47.177027    5424 out.go:204]   - Booting up control plane ...
	I0721 17:11:47.177073    5424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0721 17:11:47.177124    5424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0721 17:11:47.177163    5424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0721 17:11:47.177211    5424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0721 17:11:47.178890    5424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0721 17:11:51.682780    5424 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504583 seconds
	I0721 17:11:51.682874    5424 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0721 17:11:51.686573    5424 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0721 17:11:52.204410    5424 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0721 17:11:52.204762    5424 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-647000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0721 17:11:52.709081    5424 kubeadm.go:310] [bootstrap-token] Using token: 2c2jkx.5rjfu4kmd42cfnl9
	I0721 17:11:52.715182    5424 out.go:204]   - Configuring RBAC rules ...
	I0721 17:11:52.715245    5424 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0721 17:11:52.715299    5424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0721 17:11:52.718210    5424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0721 17:11:52.722224    5424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0721 17:11:52.723166    5424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0721 17:11:52.724070    5424 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0721 17:11:52.727395    5424 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0721 17:11:52.894096    5424 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0721 17:11:53.114058    5424 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0721 17:11:53.114524    5424 kubeadm.go:310] 
	I0721 17:11:53.114556    5424 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0721 17:11:53.114589    5424 kubeadm.go:310] 
	I0721 17:11:53.114631    5424 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0721 17:11:53.114635    5424 kubeadm.go:310] 
	I0721 17:11:53.114695    5424 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0721 17:11:53.114798    5424 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0721 17:11:53.114845    5424 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0721 17:11:53.114865    5424 kubeadm.go:310] 
	I0721 17:11:53.114896    5424 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0721 17:11:53.114927    5424 kubeadm.go:310] 
	I0721 17:11:53.115016    5424 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0721 17:11:53.115027    5424 kubeadm.go:310] 
	I0721 17:11:53.115054    5424 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0721 17:11:53.115092    5424 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0721 17:11:53.115172    5424 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0721 17:11:53.115177    5424 kubeadm.go:310] 
	I0721 17:11:53.115261    5424 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0721 17:11:53.115301    5424 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0721 17:11:53.115308    5424 kubeadm.go:310] 
	I0721 17:11:53.115354    5424 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2c2jkx.5rjfu4kmd42cfnl9 \
	I0721 17:11:53.115407    5424 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:75e342b31cd1ca4bd3abd7fd07b163bfb3e06809b400a3ad400761b302299515 \
	I0721 17:11:53.115419    5424 kubeadm.go:310] 	--control-plane 
	I0721 17:11:53.115427    5424 kubeadm.go:310] 
	I0721 17:11:53.115477    5424 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0721 17:11:53.115480    5424 kubeadm.go:310] 
	I0721 17:11:53.115527    5424 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2c2jkx.5rjfu4kmd42cfnl9 \
	I0721 17:11:53.115589    5424 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:75e342b31cd1ca4bd3abd7fd07b163bfb3e06809b400a3ad400761b302299515 
	I0721 17:11:53.115659    5424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0721 17:11:53.115671    5424 cni.go:84] Creating CNI manager for ""
	I0721 17:11:53.115679    5424 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:11:53.119440    5424 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0721 17:11:53.127453    5424 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0721 17:11:53.130494    5424 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
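The bridge CNI step writes a conflist into /etc/cni/net.d. The sketch below writes a typical bridge configuration of that kind; the JSON body and subnet are assumptions for illustration, not the actual 496-byte payload copied in the log.

```go
// Illustrative sketch: writing a bridge CNI config of the kind the step above
// copies to /etc/cni/net.d/1-k8s.conflist. Contents are assumed, not minikube's
// exact file, and the program needs root to write under /etc.
package main

import (
	"fmt"
	"os"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```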
	I0721 17:11:53.135231    5424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0721 17:11:53.135292    5424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 17:11:53.135292    5424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-647000 minikube.k8s.io/updated_at=2024_07_21T17_11_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=running-upgrade-647000 minikube.k8s.io/primary=true
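The two kubectl invocations above grant cluster-admin to the kube-system default service account and label the primary node. Expressed as a small Go driver (the `kubectl` helper is an assumption; binary path and names follow the log, and the label set is abbreviated):

```go
// Illustrative sketch of the bootstrap kubectl calls above; not minikube's code.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	base := []string{"--kubeconfig=/var/lib/minikube/kubeconfig"}
	out, err := exec.Command("/var/lib/minikube/binaries/v1.24.1/kubectl",
		append(base, args...)...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	if err := kubectl("create", "clusterrolebinding", "minikube-rbac",
		"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default"); err != nil {
		fmt.Println(err)
	}
	if err := kubectl("label", "--overwrite", "nodes", "running-upgrade-647000",
		"minikube.k8s.io/primary=true"); err != nil {
		fmt.Println(err)
	}
}
```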
	I0721 17:11:53.172883    5424 kubeadm.go:1113] duration metric: took 37.626583ms to wait for elevateKubeSystemPrivileges
	I0721 17:11:53.172935    5424 ops.go:34] apiserver oom_adj: -16
	I0721 17:11:53.172942    5424 kubeadm.go:394] duration metric: took 4m15.191702s to StartCluster
	I0721 17:11:53.172952    5424 settings.go:142] acquiring lock: {Name:mk7831d6c033f56ef11530d08a44142aeaa86fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:11:53.173042    5424 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:11:53.173413    5424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/kubeconfig: {Name:mk941eb06ccb0e2f7fcbae3a7de63e740b813743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:11:53.173627    5424 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:11:53.173632    5424 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0721 17:11:53.173665    5424 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-647000"
	I0721 17:11:53.173678    5424 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-647000"
	W0721 17:11:53.173683    5424 addons.go:243] addon storage-provisioner should already be in state true
	I0721 17:11:53.173695    5424 host.go:66] Checking if "running-upgrade-647000" exists ...
	I0721 17:11:53.173715    5424 config.go:182] Loaded profile config "running-upgrade-647000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:11:53.173738    5424 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-647000"
	I0721 17:11:53.173754    5424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-647000"
	I0721 17:11:53.173960    5424 retry.go:31] will retry after 573.413849ms: connect: dial unix /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/running-upgrade-647000/monitor: connect: connection refused
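The `retry.go:31` line reflects a retry-with-randomized-delay pattern around the driver monitor connection. A generic sketch of that pattern (not minikube's retry package) is:

```go
// Illustrative sketch: retrying a flaky operation with a randomized delay,
// in the spirit of the "will retry after 573.413849ms" line above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		delay := time.Duration(rand.Intn(1000)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, func() error {
		calls++
		if calls < 3 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```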
	I0721 17:11:53.174693    5424 kapi.go:59] client config for running-upgrade-647000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10591b790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0721 17:11:53.174811    5424 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-647000"
	W0721 17:11:53.174815    5424 addons.go:243] addon default-storageclass should already be in state true
	I0721 17:11:53.174823    5424 host.go:66] Checking if "running-upgrade-647000" exists ...
	I0721 17:11:53.175344    5424 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0721 17:11:53.175348    5424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0721 17:11:53.175354    5424 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	I0721 17:11:53.177408    5424 out.go:177] * Verifying Kubernetes components...
	I0721 17:11:53.185376    5424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:53.263350    5424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 17:11:53.269377    5424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0721 17:11:53.270943    5424 api_server.go:52] waiting for apiserver process to appear ...
	I0721 17:11:53.270972    5424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:11:53.579019    5424 api_server.go:72] duration metric: took 405.388167ms to wait for apiserver process to appear ...
	I0721 17:11:53.579033    5424 api_server.go:88] waiting for apiserver healthz status ...
	I0721 17:11:53.579042    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:53.754341    5424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:11:53.758310    5424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 17:11:53.758322    5424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0721 17:11:53.758332    5424 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	I0721 17:11:53.793766    5424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 17:11:58.581069    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:58.581112    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:03.581356    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:03.581378    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:08.581632    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:08.581709    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:13.582152    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:13.582208    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:18.582918    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:18.582949    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0721 17:12:23.580440    5424 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0721 17:12:23.583292    5424 out.go:177] * Enabled addons: storage-provisioner
	I0721 17:12:23.583679    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:23.583692    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:23.596275    5424 addons.go:510] duration metric: took 30.423484666s for enable addons: enabled=[storage-provisioner]
	I0721 17:12:28.584617    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:28.584642    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:33.585837    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:33.585862    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:38.587381    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:38.587408    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:43.589345    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:43.589368    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:48.589712    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:48.589735    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:53.591748    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:53.591840    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:53.603012    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:12:53.603085    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:53.613729    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:12:53.613800    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:53.624136    5424 logs.go:276] 2 containers: [34af2ac54634 7ccf2a2019bd]
	I0721 17:12:53.624201    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:53.634456    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:12:53.634517    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:53.645152    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:12:53.645224    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:53.660816    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:12:53.660877    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:53.671747    5424 logs.go:276] 0 containers: []
	W0721 17:12:53.671759    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:53.671820    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:53.683329    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:12:53.683345    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:12:53.683351    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:12:53.697833    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:12:53.697844    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:12:53.711628    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:12:53.711642    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:12:53.723297    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:12:53.723309    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:12:53.740633    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:12:53.740643    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:12:53.752025    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:53.752037    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:12:53.770979    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:12:53.771074    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:12:53.792460    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:53.792467    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:53.828152    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:12:53.828163    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:12:53.841579    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:12:53.841590    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:12:53.857088    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:12:53.857098    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:12:53.869065    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:53.869077    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:53.894198    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:12:53.894206    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:53.905457    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:53.905471    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:12:53.909952    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:12:53.909962    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:12:53.909986    5424 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0721 17:12:53.909989    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:12:53.909992    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:12:53.910004    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:12:53.910006    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:13:03.913956    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:08.916610    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:08.916766    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:08.930452    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:13:08.930531    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:08.941627    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:13:08.941698    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:08.951833    5424 logs.go:276] 2 containers: [34af2ac54634 7ccf2a2019bd]
	I0721 17:13:08.951902    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:08.962681    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:13:08.962751    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:08.977111    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:13:08.977185    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:08.987609    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:13:08.987679    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:08.997612    5424 logs.go:276] 0 containers: []
	W0721 17:13:08.997622    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:08.997681    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:09.014462    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:13:09.014477    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:13:09.014483    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:13:09.027635    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:13:09.027648    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:13:09.044809    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:13:09.044823    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:09.057426    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:09.057439    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:09.062222    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:09.062229    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:09.100058    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:13:09.100070    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:13:09.114239    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:13:09.114251    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:13:09.125884    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:13:09.125894    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:13:09.137623    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:09.137634    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:09.162385    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:09.162395    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:13:09.182269    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:09.182367    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:09.202862    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:13:09.202867    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:13:09.217734    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:13:09.217745    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:13:09.229480    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:13:09.229493    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:13:09.244552    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:09.244564    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:13:09.244589    5424 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0721 17:13:09.244593    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:09.244596    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:09.244600    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:09.244603    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:13:19.248462    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:24.250731    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:24.251245    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:24.290864    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:13:24.290999    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:24.311354    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:13:24.311453    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:24.326411    5424 logs.go:276] 2 containers: [34af2ac54634 7ccf2a2019bd]
	I0721 17:13:24.326491    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:24.338569    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:13:24.338641    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:24.349936    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:13:24.350030    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:24.360631    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:13:24.360696    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:24.375496    5424 logs.go:276] 0 containers: []
	W0721 17:13:24.375512    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:24.375571    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:24.386198    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:13:24.386213    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:13:24.386219    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:13:24.400228    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:13:24.400244    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:13:24.413114    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:13:24.413125    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:13:24.428631    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:13:24.428641    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:13:24.441364    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:13:24.441376    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:13:24.461992    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:13:24.462003    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:13:24.473969    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:13:24.473980    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:24.485677    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:13:24.485691    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:13:24.504897    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:24.504908    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:24.509436    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:24.509442    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:24.543600    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:13:24.543611    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:13:24.556239    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:24.556250    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:24.581334    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:24.581356    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:13:24.601937    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:24.602031    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:24.622955    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:24.622964    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:13:24.622989    5424 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0721 17:13:24.622992    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:24.622995    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:24.622998    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:24.623001    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:13:34.626905    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:39.629104    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:39.629274    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:39.648842    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:13:39.648925    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:39.663618    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:13:39.663691    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:39.675158    5424 logs.go:276] 2 containers: [34af2ac54634 7ccf2a2019bd]
	I0721 17:13:39.675226    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:39.689951    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:13:39.690018    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:39.700513    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:13:39.700582    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:39.711282    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:13:39.711347    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:39.721469    5424 logs.go:276] 0 containers: []
	W0721 17:13:39.721484    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:39.721544    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:39.731755    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:13:39.731769    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:13:39.731773    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:13:39.746184    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:13:39.746193    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:13:39.764493    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:13:39.764504    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:13:39.780811    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:13:39.780822    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:13:39.792474    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:39.792487    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:39.815766    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:13:39.815774    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:39.827423    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:39.827433    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:39.832172    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:39.832182    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:39.873369    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:13:39.873380    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:13:39.885231    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:13:39.885241    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:13:39.896826    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:13:39.896839    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:13:39.908697    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:13:39.908707    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:13:39.930752    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:39.930765    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:13:39.949326    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:39.949418    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:39.970287    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:39.970294    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:13:39.970318    5424 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0721 17:13:39.970322    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:39.970326    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:39.970329    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:39.970333    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:13:49.974271    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:54.976770    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:54.976861    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:54.990856    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:13:54.990928    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:55.001874    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:13:55.001946    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:55.012413    5424 logs.go:276] 2 containers: [34af2ac54634 7ccf2a2019bd]
	I0721 17:13:55.012484    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:55.028578    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:13:55.028644    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:55.038994    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:13:55.039064    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:55.049199    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:13:55.049271    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:55.059909    5424 logs.go:276] 0 containers: []
	W0721 17:13:55.059919    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:55.059973    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:55.070664    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:13:55.070680    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:55.070685    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:55.096170    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:55.096178    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:55.131052    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:13:55.131064    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:13:55.145586    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:13:55.145597    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:13:55.159770    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:13:55.159781    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:13:55.171284    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:13:55.171295    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:13:55.186990    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:13:55.186999    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:13:55.198946    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:13:55.198957    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:13:55.217535    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:13:55.217544    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:55.229888    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:55.229898    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:13:55.250237    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:55.250329    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:55.270902    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:55.270907    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:55.275370    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:13:55.275376    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:13:55.288491    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:13:55.288502    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:13:55.300454    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:55.300466    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:13:55.300490    5424 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0721 17:13:55.300496    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:55.300501    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:55.300558    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:55.300562    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:14:05.302615    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:10.304882    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:10.305103    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:10.323330    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:14:10.323421    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:10.338345    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:14:10.338425    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:10.350525    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:14:10.350602    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:10.361563    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:14:10.361641    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:10.372017    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:14:10.372090    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:10.382949    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:14:10.383025    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:10.392957    5424 logs.go:276] 0 containers: []
	W0721 17:14:10.392970    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:10.393038    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:10.404890    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:14:10.404905    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:10.404910    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:14:10.426167    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:10.426266    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:10.447932    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:14:10.447957    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:14:10.460649    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:14:10.460659    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:14:10.473652    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:14:10.473662    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:14:10.486099    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:14:10.486111    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:10.498816    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:14:10.498829    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:14:10.511216    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:14:10.511228    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:14:10.527022    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:10.527032    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:10.532480    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:10.532492    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:10.574416    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:14:10.574434    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:14:10.589334    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:10.589347    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:10.615289    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:14:10.615302    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:14:10.630443    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:14:10.630453    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:14:10.642920    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:14:10.642933    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:14:10.661791    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:14:10.661810    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:14:10.675124    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:10.675134    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:14:10.675161    5424 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0721 17:14:10.675166    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:10.675170    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:10.675174    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:10.675176    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:14:20.678230    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:25.680409    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:25.680661    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:25.703585    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:14:25.703699    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:25.718711    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:14:25.718785    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:25.731662    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:14:25.731733    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:25.742793    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:14:25.742858    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:25.753261    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:14:25.753327    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:25.763952    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:14:25.764023    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:25.774684    5424 logs.go:276] 0 containers: []
	W0721 17:14:25.774695    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:25.774754    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:25.785117    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:14:25.785132    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:14:25.785137    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:14:25.796399    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:14:25.796413    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:14:25.807928    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:14:25.807940    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:14:25.825909    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:25.825918    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:14:25.845663    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:25.845755    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:25.866054    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:25.866061    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:25.871386    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:14:25.871396    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:14:25.885947    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:14:25.885960    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:14:25.902577    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:25.902587    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:25.927174    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:25.927183    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:25.961622    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:14:25.961637    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:14:25.976393    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:14:25.976406    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:14:25.988343    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:14:25.988354    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:14:26.008850    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:14:26.008861    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:14:26.020978    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:14:26.020991    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:14:26.032677    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:14:26.032687    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:26.043783    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:26.043795    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:14:26.043819    5424 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0721 17:14:26.043825    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:26.043828    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:26.043833    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:26.043836    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:14:36.047346    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:41.050055    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:41.050204    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:41.062423    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:14:41.062500    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:41.073835    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:14:41.073910    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:41.089627    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:14:41.089705    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:41.100234    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:14:41.100302    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:41.113229    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:14:41.113302    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:41.125579    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:14:41.125651    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:41.136443    5424 logs.go:276] 0 containers: []
	W0721 17:14:41.136455    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:41.136514    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:41.146529    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:14:41.146550    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:14:41.146560    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:14:41.161734    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:14:41.161746    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:14:41.173453    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:41.173465    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:41.196544    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:14:41.196551    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:41.208398    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:14:41.208409    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:14:41.220240    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:14:41.220249    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:14:41.245272    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:14:41.245283    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:14:41.262044    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:14:41.262057    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:14:41.286657    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:14:41.286668    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:14:41.303746    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:14:41.303758    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:14:41.324181    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:41.324192    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:41.328640    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:14:41.328649    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:14:41.340420    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:14:41.340431    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:14:41.351732    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:41.351742    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:14:41.370330    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:41.370424    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:41.391267    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:41.391272    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:41.427084    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:41.427096    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:14:41.427123    5424 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0721 17:14:41.427127    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:41.427132    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:41.427137    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:41.427140    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:14:51.431058    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:56.433241    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:56.433351    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:56.444027    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:14:56.444098    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:56.454426    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:14:56.454500    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:56.464855    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:14:56.464930    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:56.475308    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:14:56.475374    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:56.486101    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:14:56.486172    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:56.496773    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:14:56.496835    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:56.507439    5424 logs.go:276] 0 containers: []
	W0721 17:14:56.507452    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:56.507508    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:56.519116    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:14:56.519134    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:56.519140    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:56.594177    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:14:56.594188    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:14:56.610028    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:56.610038    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:56.615325    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:14:56.615334    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:14:56.629484    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:14:56.629494    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:14:56.652181    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:14:56.652193    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:14:56.665867    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:14:56.665878    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:14:56.677940    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:14:56.677951    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:14:56.699760    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:56.699770    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:56.724345    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:14:56.724360    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:56.735725    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:56.735736    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:14:56.755714    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:56.755814    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:56.776689    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:14:56.776695    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:14:56.790851    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:14:56.790866    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:14:56.802596    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:14:56.802608    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:14:56.815134    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:14:56.815144    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:14:56.826915    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:56.826925    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:14:56.826952    5424 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0721 17:14:56.826956    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:56.826960    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:56.826963    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:56.826966    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:15:06.830833    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:11.833056    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:11.833258    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:15:11.851932    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:15:11.852048    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:15:11.866162    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:15:11.866229    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:15:11.878487    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:15:11.878560    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:15:11.889547    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:15:11.889612    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:15:11.900093    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:15:11.900164    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:15:11.912823    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:15:11.912888    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:15:11.923597    5424 logs.go:276] 0 containers: []
	W0721 17:15:11.923608    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:15:11.923663    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:15:11.934307    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:15:11.934326    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:15:11.934331    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:15:11.948289    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:15:11.948301    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:15:11.960524    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:15:11.960539    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:15:11.976283    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:15:11.976295    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:15:12.001145    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:15:12.001157    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:15:12.012913    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:15:12.012926    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:15:12.031017    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:15:12.031027    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:15:12.042681    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:15:12.042692    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:15:12.060570    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:15:12.060580    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:15:12.080911    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:15:12.081004    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:15:12.101898    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:15:12.101904    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:15:12.115485    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:15:12.115498    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:15:12.127031    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:15:12.127045    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:15:12.131828    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:15:12.131836    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:15:12.169404    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:15:12.169414    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:15:12.181340    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:15:12.181353    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:15:12.192884    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:15:12.192894    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:15:12.192919    5424 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0721 17:15:12.192925    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:15:12.192929    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:15:12.192933    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:15:12.192935    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:15:22.196746    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:27.198802    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:27.198897    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:15:27.209705    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:15:27.209775    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:15:27.222819    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:15:27.222890    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:15:27.237901    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:15:27.237975    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:15:27.249854    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:15:27.249923    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:15:27.265250    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:15:27.265323    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:15:27.275612    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:15:27.275679    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:15:27.285960    5424 logs.go:276] 0 containers: []
	W0721 17:15:27.285974    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:15:27.286028    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:15:27.296906    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:15:27.296923    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:15:27.296927    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:15:27.335996    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:15:27.336009    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:15:27.349490    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:15:27.349502    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:15:27.361208    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:15:27.361219    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:15:27.385804    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:15:27.385812    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:15:27.397955    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:15:27.397966    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:15:27.412047    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:15:27.412057    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:15:27.424316    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:15:27.424326    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:15:27.436115    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:15:27.436128    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:15:27.448963    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:15:27.448973    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:15:27.461003    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:15:27.461014    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:15:27.476537    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:15:27.476551    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:15:27.496459    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:15:27.496470    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:15:27.516816    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:15:27.516909    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:15:27.537921    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:15:27.537926    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:15:27.542743    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:15:27.542751    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:15:27.560721    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:15:27.560732    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:15:27.560760    5424 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0721 17:15:27.560765    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:15:27.560768    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:15:27.560773    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:15:27.560780    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:15:37.564643    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:42.566165    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:42.566284    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:15:42.579617    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:15:42.579694    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:15:42.592000    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:15:42.592077    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:15:42.603431    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:15:42.603504    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:15:42.620107    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:15:42.620180    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:15:42.637075    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:15:42.637146    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:15:42.656251    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:15:42.656331    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:15:42.666713    5424 logs.go:276] 0 containers: []
	W0721 17:15:42.666726    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:15:42.666782    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:15:42.677634    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:15:42.677651    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:15:42.677657    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:15:42.693392    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:15:42.693405    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:15:42.705829    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:15:42.705843    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:15:42.721554    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:15:42.721567    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:15:42.737149    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:15:42.737160    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:15:42.753960    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:15:42.753974    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:15:42.766299    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:15:42.766313    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:15:42.786361    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:15:42.786454    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:15:42.807021    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:15:42.807027    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:15:42.821564    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:15:42.821576    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:15:42.833251    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:15:42.833262    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:15:42.850580    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:15:42.850590    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:15:42.888673    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:15:42.888685    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:15:42.901070    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:15:42.901080    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:15:42.913894    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:15:42.913907    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:15:42.918956    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:15:42.918965    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:15:42.943039    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:15:42.943047    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:15:42.943072    5424 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0721 17:15:42.943077    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:15:42.943081    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	  Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:15:42.943086    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:15:42.943088    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:15:52.946940    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:57.949031    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:57.954599    5424 out.go:177] 
	W0721 17:15:57.959476    5424 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0721 17:15:57.959488    5424 out.go:239] * 
	* 
	W0721 17:15:57.960183    5424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:15:57.970475    5424 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-647000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-21 17:15:58.044195 -0700 PDT m=+3128.617817543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-647000 -n running-upgrade-647000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-647000 -n running-upgrade-647000: exit status 2 (15.724418667s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-647000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-208000          | force-systemd-flag-208000 | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-181000              | force-systemd-env-181000  | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-181000           | force-systemd-env-181000  | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT | 21 Jul 24 17:06 PDT |
	| start   | -p docker-flags-007000                | docker-flags-007000       | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-208000             | force-systemd-flag-208000 | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-208000          | force-systemd-flag-208000 | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT | 21 Jul 24 17:06 PDT |
	| start   | -p cert-expiration-578000             | cert-expiration-578000    | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-007000 ssh               | docker-flags-007000       | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-007000 ssh               | docker-flags-007000       | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-007000                | docker-flags-007000       | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT | 21 Jul 24 17:06 PDT |
	| start   | -p cert-options-668000                | cert-options-668000       | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-668000 ssh               | cert-options-668000       | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-668000 -- sudo        | cert-options-668000       | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-668000                | cert-options-668000       | jenkins | v1.33.1 | 21 Jul 24 17:06 PDT | 21 Jul 24 17:06 PDT |
	| start   | -p running-upgrade-647000             | minikube                  | jenkins | v1.26.0 | 21 Jul 24 17:06 PDT | 21 Jul 24 17:07 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-647000             | running-upgrade-647000    | jenkins | v1.33.1 | 21 Jul 24 17:07 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-578000             | cert-expiration-578000    | jenkins | v1.33.1 | 21 Jul 24 17:09 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-578000             | cert-expiration-578000    | jenkins | v1.33.1 | 21 Jul 24 17:09 PDT | 21 Jul 24 17:09 PDT |
	| start   | -p kubernetes-upgrade-140000          | kubernetes-upgrade-140000 | jenkins | v1.33.1 | 21 Jul 24 17:09 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-140000          | kubernetes-upgrade-140000 | jenkins | v1.33.1 | 21 Jul 24 17:09 PDT | 21 Jul 24 17:09 PDT |
	| start   | -p kubernetes-upgrade-140000          | kubernetes-upgrade-140000 | jenkins | v1.33.1 | 21 Jul 24 17:09 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-140000          | kubernetes-upgrade-140000 | jenkins | v1.33.1 | 21 Jul 24 17:09 PDT | 21 Jul 24 17:09 PDT |
	| start   | -p stopped-upgrade-930000             | minikube                  | jenkins | v1.26.0 | 21 Jul 24 17:09 PDT | 21 Jul 24 17:10 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-930000 stop           | minikube                  | jenkins | v1.26.0 | 21 Jul 24 17:10 PDT | 21 Jul 24 17:10 PDT |
	| start   | -p stopped-upgrade-930000             | stopped-upgrade-930000    | jenkins | v1.33.1 | 21 Jul 24 17:10 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 17:10:46
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
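The entries that follow use the klog-style header noted above. As a reading aid only, here is a minimal, hypothetical Go sketch (not part of the captured log) that splits one such line into severity, date, time, thread id, source location, and message; the sample string is copied from the first entry below:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Header shape, per the format line above: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var logLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+:\d+)\] (.*)$`)

	func main() {
		sample := "I0721 17:10:46.986520    5580 out.go:291] Setting OutFile to fd 1 ..."
		if m := logLine.FindStringSubmatch(sample); m != nil {
			// Prints each captured field; msg is quoted to make trailing spaces visible.
			fmt.Printf("severity=%s date=%s time=%s threadid=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}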
	I0721 17:10:46.986520    5580 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:10:46.986692    5580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:10:46.986696    5580 out.go:304] Setting ErrFile to fd 2...
	I0721 17:10:46.986698    5580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:10:46.986863    5580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:10:46.988078    5580 out.go:298] Setting JSON to false
	I0721 17:10:47.006696    5580 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4209,"bootTime":1721602837,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:10:47.006760    5580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:10:47.011441    5580 out.go:177] * [stopped-upgrade-930000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:10:47.019428    5580 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:10:47.019469    5580 notify.go:220] Checking for updates...
	I0721 17:10:47.026387    5580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:10:47.029376    5580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:10:47.032403    5580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:10:47.035409    5580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:10:47.036713    5580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:10:47.039709    5580 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:10:47.043320    5580 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0721 17:10:47.046413    5580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:10:47.050370    5580 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 17:10:47.057379    5580 start.go:297] selected driver: qemu2
	I0721 17:10:47.057388    5580 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0721 17:10:47.057431    5580 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:10:47.060012    5580 cni.go:84] Creating CNI manager for ""
	I0721 17:10:47.060031    5580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:10:47.060050    5580 start.go:340] cluster config:
	{Name:stopped-upgrade-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0721 17:10:47.060105    5580 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:10:47.067400    5580 out.go:177] * Starting "stopped-upgrade-930000" primary control-plane node in "stopped-upgrade-930000" cluster
	I0721 17:10:47.071346    5580 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0721 17:10:47.071358    5580 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0721 17:10:47.071364    5580 cache.go:56] Caching tarball of preloaded images
	I0721 17:10:47.071419    5580 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:10:47.071424    5580 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0721 17:10:47.071484    5580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/config.json ...
	I0721 17:10:47.071788    5580 start.go:360] acquireMachinesLock for stopped-upgrade-930000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:10:47.071820    5580 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "stopped-upgrade-930000"
	I0721 17:10:47.071828    5580 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:10:47.071833    5580 fix.go:54] fixHost starting: 
	I0721 17:10:47.071931    5580 fix.go:112] recreateIfNeeded on stopped-upgrade-930000: state=Stopped err=<nil>
	W0721 17:10:47.071938    5580 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:10:47.076382    5580 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-930000" ...
	I0721 17:10:47.084351    5580 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:10:47.084413    5580 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50452-:22,hostfwd=tcp::50453-:2376,hostname=stopped-upgrade-930000 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/disk.qcow2
	I0721 17:10:47.131330    5580 main.go:141] libmachine: STDOUT: 
	I0721 17:10:47.131359    5580 main.go:141] libmachine: STDERR: 
	I0721 17:10:47.131371    5580 main.go:141] libmachine: Waiting for VM to start (ssh -p 50452 docker@127.0.0.1)...
	I0721 17:10:54.218046    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:10:59.220166    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:10:59.220315    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:10:59.233478    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:10:59.233558    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:10:59.251718    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:10:59.251794    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:10:59.262489    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:10:59.262557    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:10:59.273145    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:10:59.273221    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:10:59.283476    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:10:59.283544    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:10:59.294604    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:10:59.294671    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:10:59.305813    5424 logs.go:276] 0 containers: []
	W0721 17:10:59.305824    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:10:59.305883    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:10:59.317269    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:10:59.317285    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:10:59.317291    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:10:59.328824    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:10:59.328836    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:10:59.341421    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:10:59.341436    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:10:59.358778    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:10:59.358789    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:10:59.384281    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:10:59.384297    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:10:59.426583    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:10:59.426683    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:10:59.427754    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:10:59.427762    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:10:59.432663    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:10:59.432672    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:10:59.449599    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:10:59.449614    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:10:59.465312    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:10:59.465326    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:10:59.481434    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:10:59.481447    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:10:59.497317    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:10:59.497328    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:10:59.517094    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:10:59.517106    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:10:59.530981    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:10:59.530991    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:10:59.546001    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:10:59.546015    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:10:59.559064    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:10:59.559075    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:10:59.596323    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:10:59.596335    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:10:59.613243    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:10:59.613254    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:10:59.625193    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:10:59.625204    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:10:59.625229    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:10:59.625234    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:10:59.625299    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:10:59.625304    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:10:59.625307    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:11:06.992145    5580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/config.json ...
	I0721 17:11:06.992757    5580 machine.go:94] provisionDockerMachine start ...
	I0721 17:11:06.992951    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:06.993377    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:06.993389    5580 main.go:141] libmachine: About to run SSH command:
	hostname
	I0721 17:11:07.076225    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0721 17:11:07.076257    5580 buildroot.go:166] provisioning hostname "stopped-upgrade-930000"
	I0721 17:11:07.076387    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:07.076652    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:07.076663    5580 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-930000 && echo "stopped-upgrade-930000" | sudo tee /etc/hostname
	I0721 17:11:07.150582    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-930000
	
	I0721 17:11:07.150640    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:07.150797    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:07.150807    5580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-930000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-930000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-930000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 17:11:07.213572    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 17:11:07.213585    5580 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1409/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1409/.minikube}
	I0721 17:11:07.213601    5580 buildroot.go:174] setting up certificates
	I0721 17:11:07.213606    5580 provision.go:84] configureAuth start
	I0721 17:11:07.213610    5580 provision.go:143] copyHostCerts
	I0721 17:11:07.213700    5580 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.pem, removing ...
	I0721 17:11:07.213710    5580 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.pem
	I0721 17:11:07.213830    5580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.pem (1078 bytes)
	I0721 17:11:07.214016    5580 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1409/.minikube/cert.pem, removing ...
	I0721 17:11:07.214020    5580 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1409/.minikube/cert.pem
	I0721 17:11:07.214074    5580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1409/.minikube/cert.pem (1123 bytes)
	I0721 17:11:07.214184    5580 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1409/.minikube/key.pem, removing ...
	I0721 17:11:07.214187    5580 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1409/.minikube/key.pem
	I0721 17:11:07.214233    5580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1409/.minikube/key.pem (1675 bytes)
	I0721 17:11:07.214323    5580 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-930000 san=[127.0.0.1 localhost minikube stopped-upgrade-930000]
	I0721 17:11:07.324288    5580 provision.go:177] copyRemoteCerts
	I0721 17:11:07.324323    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 17:11:07.324331    5580 sshutil.go:53] new ssh client: &{IP:localhost Port:50452 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/id_rsa Username:docker}
	I0721 17:11:07.359832    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0721 17:11:07.366770    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0721 17:11:07.373513    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0721 17:11:07.380731    5580 provision.go:87] duration metric: took 167.12475ms to configureAuth
	I0721 17:11:07.380740    5580 buildroot.go:189] setting minikube options for container-runtime
	I0721 17:11:07.380852    5580 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:11:07.380893    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:07.380978    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:07.380983    5580 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0721 17:11:07.446426    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0721 17:11:07.446435    5580 buildroot.go:70] root file system type: tmpfs
	I0721 17:11:07.446487    5580 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0721 17:11:07.446540    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:07.446646    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:07.446681    5580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0721 17:11:07.514958    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0721 17:11:07.515016    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:07.515143    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:07.515162    5580 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0721 17:11:07.851418    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0721 17:11:07.851433    5580 machine.go:97] duration metric: took 858.691042ms to provisionDockerMachine
	I0721 17:11:07.851439    5580 start.go:293] postStartSetup for "stopped-upgrade-930000" (driver="qemu2")
	I0721 17:11:07.851446    5580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 17:11:07.851505    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 17:11:07.851518    5580 sshutil.go:53] new ssh client: &{IP:localhost Port:50452 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/id_rsa Username:docker}
	I0721 17:11:07.884356    5580 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 17:11:07.885656    5580 info.go:137] Remote host: Buildroot 2021.02.12
	I0721 17:11:07.885664    5580 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1409/.minikube/addons for local assets ...
	I0721 17:11:07.885744    5580 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1409/.minikube/files for local assets ...
	I0721 17:11:07.885865    5580 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1409/.minikube/files/etc/ssl/certs/19112.pem -> 19112.pem in /etc/ssl/certs
	I0721 17:11:07.885989    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0721 17:11:07.889055    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/files/etc/ssl/certs/19112.pem --> /etc/ssl/certs/19112.pem (1708 bytes)
	I0721 17:11:07.896083    5580 start.go:296] duration metric: took 44.640459ms for postStartSetup
	I0721 17:11:07.896096    5580 fix.go:56] duration metric: took 20.82484075s for fixHost
	I0721 17:11:07.896129    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:07.896236    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:07.896241    5580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0721 17:11:07.958536    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721607068.069196796
	
	I0721 17:11:07.958547    5580 fix.go:216] guest clock: 1721607068.069196796
	I0721 17:11:07.958551    5580 fix.go:229] Guest: 2024-07-21 17:11:08.069196796 -0700 PDT Remote: 2024-07-21 17:11:07.896098 -0700 PDT m=+20.938203001 (delta=173.098796ms)
	I0721 17:11:07.958564    5580 fix.go:200] guest clock delta is within tolerance: 173.098796ms
	I0721 17:11:07.958568    5580 start.go:83] releasing machines lock for "stopped-upgrade-930000", held for 20.887321041s
	I0721 17:11:07.958627    5580 ssh_runner.go:195] Run: cat /version.json
	I0721 17:11:07.958636    5580 sshutil.go:53] new ssh client: &{IP:localhost Port:50452 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/id_rsa Username:docker}
	I0721 17:11:07.958646    5580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0721 17:11:07.958664    5580 sshutil.go:53] new ssh client: &{IP:localhost Port:50452 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/id_rsa Username:docker}
	W0721 17:11:07.959187    5580 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50572->127.0.0.1:50452: write: connection reset by peer
	I0721 17:11:07.959205    5580 retry.go:31] will retry after 369.011209ms: ssh: handshake failed: write tcp 127.0.0.1:50572->127.0.0.1:50452: write: connection reset by peer
	W0721 17:11:08.385143    5580 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0721 17:11:08.385250    5580 ssh_runner.go:195] Run: systemctl --version
	I0721 17:11:08.388212    5580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 17:11:08.391706    5580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 17:11:08.391766    5580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0721 17:11:08.396856    5580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0721 17:11:08.414966    5580 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0721 17:11:08.414984    5580 start.go:495] detecting cgroup driver to use...
	I0721 17:11:08.415074    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 17:11:08.421510    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0721 17:11:08.424696    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0721 17:11:08.428118    5580 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0721 17:11:08.428141    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0721 17:11:08.431020    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 17:11:08.433860    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0721 17:11:08.437376    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 17:11:08.440717    5580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 17:11:08.444214    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0721 17:11:08.447168    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0721 17:11:08.449961    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0721 17:11:08.453176    5580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 17:11:08.456378    5580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 17:11:08.459309    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:08.545196    5580 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0721 17:11:08.551626    5580 start.go:495] detecting cgroup driver to use...
	I0721 17:11:08.551686    5580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0721 17:11:08.560910    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 17:11:08.565638    5580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 17:11:08.572303    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 17:11:08.576841    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0721 17:11:08.581271    5580 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0721 17:11:08.637205    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0721 17:11:08.642446    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 17:11:08.647795    5580 ssh_runner.go:195] Run: which cri-dockerd
	I0721 17:11:08.649116    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0721 17:11:08.652168    5580 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0721 17:11:08.657185    5580 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0721 17:11:08.726343    5580 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0721 17:11:08.790401    5580 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0721 17:11:08.790467    5580 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0721 17:11:08.795460    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:08.876288    5580 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0721 17:11:10.034123    5580 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157848792s)
	I0721 17:11:10.034191    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0721 17:11:10.039145    5580 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0721 17:11:10.045123    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0721 17:11:10.050442    5580 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0721 17:11:10.110666    5580 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0721 17:11:10.176496    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:10.240155    5580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0721 17:11:10.245522    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0721 17:11:10.250388    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:10.308185    5580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0721 17:11:10.346443    5580 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0721 17:11:10.346525    5580 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0721 17:11:10.350536    5580 start.go:563] Will wait 60s for crictl version
	I0721 17:11:10.350599    5580 ssh_runner.go:195] Run: which crictl
	I0721 17:11:10.351901    5580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 17:11:10.366463    5580 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0721 17:11:10.366532    5580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0721 17:11:10.382435    5580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0721 17:11:09.629218    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:10.407277    5580 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0721 17:11:10.407341    5580 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0721 17:11:10.408600    5580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 17:11:10.412216    5580 kubeadm.go:883] updating cluster {Name:stopped-upgrade-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0721 17:11:10.412260    5580 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0721 17:11:10.412298    5580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0721 17:11:10.422477    5580 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0721 17:11:10.422485    5580 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0721 17:11:10.422530    5580 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0721 17:11:10.425692    5580 ssh_runner.go:195] Run: which lz4
	I0721 17:11:10.427081    5580 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0721 17:11:10.428261    5580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0721 17:11:10.428270    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0721 17:11:11.374936    5580 docker.go:649] duration metric: took 947.911125ms to copy over tarball
	I0721 17:11:11.374991    5580 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0721 17:11:14.631146    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:14.631213    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:11:14.643639    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:11:14.643686    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:11:14.655529    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:11:14.655578    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:11:14.666794    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:11:14.666862    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:11:14.677345    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:11:14.677411    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:11:14.688074    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:11:14.688131    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:11:14.698776    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:11:14.698836    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:11:14.709778    5424 logs.go:276] 0 containers: []
	W0721 17:11:14.709790    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:11:14.709840    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:11:14.722504    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:11:14.722520    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:11:14.722526    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:11:14.733599    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:11:14.733610    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:11:14.745097    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:11:14.745109    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:11:14.762727    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:11:14.762740    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:11:14.776567    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:11:14.776576    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:11:14.791183    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:11:14.791195    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:11:14.804568    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:11:14.804578    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:11:14.828046    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:11:14.828057    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:11:14.839337    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:11:14.839350    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:11:14.853046    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:11:14.853058    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:11:14.868235    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:11:14.868248    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:11:14.903072    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:11:14.903083    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:11:14.928828    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:11:14.928839    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:11:14.943934    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:11:14.943945    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:11:14.955824    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:11:14.955836    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:11:14.968787    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:11:14.968801    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:11:15.007432    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:11:15.007527    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:11:15.008523    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:11:15.008528    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:11:15.013010    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:11:15.013019    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:11:15.013044    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:11:15.013049    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:11:15.013053    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:11:15.013057    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:11:15.013060    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:11:12.537974    5580 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.163001791s)
	I0721 17:11:12.537991    5580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0721 17:11:12.553879    5580 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0721 17:11:12.557246    5580 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0721 17:11:12.562623    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:12.623240    5580 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0721 17:11:14.320488    5580 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.697277083s)
	I0721 17:11:14.320592    5580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0721 17:11:14.333236    5580 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0721 17:11:14.333244    5580 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0721 17:11:14.333250    5580 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0721 17:11:14.337495    5580 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:11:14.339584    5580 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:11:14.342008    5580 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:11:14.342105    5580 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:11:14.343499    5580 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:11:14.343568    5580 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:11:14.345183    5580 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:11:14.345194    5580 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:11:14.346753    5580 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0721 17:11:14.346768    5580 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:11:14.348087    5580 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:11:14.348114    5580 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0721 17:11:14.349868    5580 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:11:14.349924    5580 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0721 17:11:14.350715    5580 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0721 17:11:14.352293    5580 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:11:16.557896    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:11:16.596146    5580 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0721 17:11:16.596207    5580 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:11:16.596320    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:11:16.617309    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0721 17:11:16.647262    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:11:16.665226    5580 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0721 17:11:16.665247    5580 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:11:16.665305    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:11:16.678355    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0721 17:11:16.701093    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0721 17:11:16.711795    5580 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0721 17:11:16.711814    5580 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0721 17:11:16.711873    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0721 17:11:16.722029    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0721 17:11:16.723404    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:11:16.733683    5580 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0721 17:11:16.733703    5580 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:11:16.733763    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:11:16.744347    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0721 17:11:17.239912    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0721 17:11:17.248644    5580 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0721 17:11:17.248801    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:11:17.259682    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:11:17.261034    5580 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0721 17:11:17.261053    5580 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0721 17:11:17.261089    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0721 17:11:17.286561    5580 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0721 17:11:17.286585    5580 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:11:17.286648    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:11:17.288457    5580 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0721 17:11:17.288470    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0721 17:11:17.288472    5580 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:11:17.288507    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:11:17.288568    5580 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0721 17:11:17.299760    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0721 17:11:17.299887    5580 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	W0721 17:11:17.300488    5580 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0721 17:11:17.300582    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:11:17.302860    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0721 17:11:17.302870    5580 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0721 17:11:17.302884    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0721 17:11:17.302897    5580 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0721 17:11:17.302909    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0721 17:11:17.316934    5580 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0721 17:11:17.316957    5580 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:11:17.317009    5580 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:11:17.323561    5580 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0721 17:11:17.323578    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0721 17:11:17.344619    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0721 17:11:17.344745    5580 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0721 17:11:17.389571    5580 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0721 17:11:17.389592    5580 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0721 17:11:17.389598    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0721 17:11:17.389609    5580 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0721 17:11:17.389634    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0721 17:11:17.450024    5580 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0721 17:11:17.450054    5580 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0721 17:11:17.450060    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0721 17:11:17.683748    5580 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
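The lines above trace the per-image fallback minikube takes when the preload did not contain the expected registry.k8s.io tags: for each missing image it stats the target path on the guest, copies the cached tarball over when the stat fails, then pipes the tarball into docker load. A rough shell sketch of that sequence for the pause image (illustrative only; the real flow is driven over SSH from cache_images.go and ssh_runner.go, and the SSH user/host below are placeholders):

    IMG=/var/lib/minikube/images/pause_3.7
    CACHE=~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
    # existence check -- the "stat -c ..." runs above exit 1 because the file is missing
    if ! stat -c '%s %y' "$IMG" >/dev/null 2>&1; then
        scp "$CACHE" "user@10.0.2.15:$IMG"    # "scp ... --> /var/lib/minikube/images/pause_3.7"
    fi
    sudo cat "$IMG" | docker load             # "Loading image: /var/lib/minikube/images/pause_3.7"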
	I0721 17:11:17.683785    5580 cache_images.go:92] duration metric: took 3.35062225s to LoadCachedImages
	W0721 17:11:17.683830    5580 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0721 17:11:17.683836    5580 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0721 17:11:17.683886    5580 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-930000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0721 17:11:17.683952    5580 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0721 17:11:17.697362    5580 cni.go:84] Creating CNI manager for ""
	I0721 17:11:17.697375    5580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:11:17.697380    5580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0721 17:11:17.697389    5580 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-930000 NodeName:stopped-upgrade-930000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0721 17:11:17.697455    5580 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-930000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0721 17:11:17.697503    5580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0721 17:11:17.700979    5580 binaries.go:44] Found k8s binaries, skipping transfer
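The KubeletConfiguration rendered a few lines above pins cgroupDriver to cgroupfs, which is the value probed earlier with docker info --format {{.CgroupDriver}}; the runtime and the kubelet need to agree on this setting for pod cgroups to be managed correctly. A quick manual check on a node (illustrative commands, not part of this test run):

    docker info --format '{{.CgroupDriver}}'          # runtime side, e.g. cgroupfs
    grep cgroupDriver /var/lib/kubelet/config.yaml    # kubelet side, should match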
	I0721 17:11:17.701003    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0721 17:11:17.704157    5580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0721 17:11:17.709383    5580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 17:11:17.714282    5580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0721 17:11:17.719738    5580 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0721 17:11:17.720925    5580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 17:11:17.724553    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:17.786414    5580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 17:11:17.792518    5580 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000 for IP: 10.0.2.15
	I0721 17:11:17.792531    5580 certs.go:194] generating shared ca certs ...
	I0721 17:11:17.792539    5580 certs.go:226] acquiring lock for ca certs: {Name:mke4827a2590eed55d39c612acfba4d65d3007ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:11:17.792703    5580 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.key
	I0721 17:11:17.792755    5580 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/proxy-client-ca.key
	I0721 17:11:17.792760    5580 certs.go:256] generating profile certs ...
	I0721 17:11:17.792833    5580 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/client.key
	I0721 17:11:17.792852    5580 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.key.75e49a33
	I0721 17:11:17.792863    5580 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.crt.75e49a33 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0721 17:11:17.893475    5580 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.crt.75e49a33 ...
	I0721 17:11:17.893486    5580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.crt.75e49a33: {Name:mk79f4899f7306d2c1b64bd6b3b7c05e91307157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:11:17.893790    5580 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.key.75e49a33 ...
	I0721 17:11:17.893795    5580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.key.75e49a33: {Name:mk68413454cdd12cdcb821263e9207a0c1ecc72d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:11:17.893940    5580 certs.go:381] copying /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.crt.75e49a33 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.crt
	I0721 17:11:17.894663    5580 certs.go:385] copying /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.key.75e49a33 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.key
	I0721 17:11:17.894857    5580 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/proxy-client.key
	I0721 17:11:17.894996    5580 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/1911.pem (1338 bytes)
	W0721 17:11:17.895026    5580 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/1911_empty.pem, impossibly tiny 0 bytes
	I0721 17:11:17.895032    5580 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca-key.pem (1679 bytes)
	I0721 17:11:17.895059    5580 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem (1078 bytes)
	I0721 17:11:17.895086    5580 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem (1123 bytes)
	I0721 17:11:17.895111    5580 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/key.pem (1675 bytes)
	I0721 17:11:17.895374    5580 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/files/etc/ssl/certs/19112.pem (1708 bytes)
	I0721 17:11:17.895699    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 17:11:17.902307    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 17:11:17.909122    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 17:11:17.916341    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0721 17:11:17.923414    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0721 17:11:17.930288    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0721 17:11:17.936841    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 17:11:17.944279    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 17:11:17.951065    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/1911.pem --> /usr/share/ca-certificates/1911.pem (1338 bytes)
	I0721 17:11:17.957672    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/files/etc/ssl/certs/19112.pem --> /usr/share/ca-certificates/19112.pem (1708 bytes)
	I0721 17:11:17.964801    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 17:11:17.971684    5580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0721 17:11:17.976773    5580 ssh_runner.go:195] Run: openssl version
	I0721 17:11:17.978775    5580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19112.pem && ln -fs /usr/share/ca-certificates/19112.pem /etc/ssl/certs/19112.pem"
	I0721 17:11:17.981559    5580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19112.pem
	I0721 17:11:17.982985    5580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:32 /usr/share/ca-certificates/19112.pem
	I0721 17:11:17.983008    5580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19112.pem
	I0721 17:11:17.984634    5580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19112.pem /etc/ssl/certs/3ec20f2e.0"
	I0721 17:11:17.987367    5580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 17:11:17.990038    5580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 17:11:17.991454    5580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:24 /usr/share/ca-certificates/minikubeCA.pem
	I0721 17:11:17.991473    5580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 17:11:17.993094    5580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0721 17:11:17.996189    5580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1911.pem && ln -fs /usr/share/ca-certificates/1911.pem /etc/ssl/certs/1911.pem"
	I0721 17:11:17.998888    5580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1911.pem
	I0721 17:11:18.000208    5580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:32 /usr/share/ca-certificates/1911.pem
	I0721 17:11:18.000227    5580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1911.pem
	I0721 17:11:18.001972    5580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1911.pem /etc/ssl/certs/51391683.0"
	I0721 17:11:18.005206    5580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 17:11:18.006614    5580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0721 17:11:18.008398    5580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0721 17:11:18.010235    5580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0721 17:11:18.012175    5580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0721 17:11:18.013898    5580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0721 17:11:18.015746    5580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
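The six openssl x509 ... -checkend 86400 runs above verify that each control-plane certificate remains valid for at least the next 86400 seconds (24 hours); -checkend exits 0 when the certificate will not expire within that window and 1 when it will. A minimal standalone form of the same check, using the apiserver cert path seen earlier in this log:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h (or is already expired)"
    fi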
	I0721 17:11:18.017512    5580 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0721 17:11:18.017581    5580 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0721 17:11:18.027371    5580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0721 17:11:18.030499    5580 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0721 17:11:18.030503    5580 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0721 17:11:18.030525    5580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0721 17:11:18.033536    5580 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0721 17:11:18.033832    5580 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-930000" does not appear in /Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:11:18.033972    5580 kubeconfig.go:62] /Users/jenkins/minikube-integration/19312-1409/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-930000" cluster setting kubeconfig missing "stopped-upgrade-930000" context setting]
	I0721 17:11:18.034156    5580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/kubeconfig: {Name:mk941eb06ccb0e2f7fcbae3a7de63e740b813743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:11:18.034605    5580 kapi.go:59] client config for stopped-upgrade-930000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101a1b790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0721 17:11:18.034911    5580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0721 17:11:18.037448    5580 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-930000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
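The diff above is the config-drift report: the sudo diff -u run a few lines earlier compares the kubeadm.yaml left over from the previous start against the freshly rendered kubeadm.yaml.new, and any difference (here the cri-dockerd socket scheme plus the cgroup driver and kubelet settings) makes minikube reconfigure the cluster from the new file. The shell-level pattern is roughly the following (a sketch, not the actual kubeadm.go logic):

    # non-zero exit from diff means the rendered config changed since the last start
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
        # ...then stop the kube-system containers and re-run the kubeadm init phases, as the log shows next
    fi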
	I0721 17:11:18.037453    5580 kubeadm.go:1160] stopping kube-system containers ...
	I0721 17:11:18.037490    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0721 17:11:18.048101    5580 docker.go:483] Stopping containers: [e507e67410b2 e51ba4e1d673 a5aa61dd685d ea215f4edd83 3b08d4c9ea9d 22353ec24f6d e619eab918db d445b75bd5c3]
	I0721 17:11:18.048162    5580 ssh_runner.go:195] Run: docker stop e507e67410b2 e51ba4e1d673 a5aa61dd685d ea215f4edd83 3b08d4c9ea9d 22353ec24f6d e619eab918db d445b75bd5c3
	I0721 17:11:18.059983    5580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0721 17:11:18.065309    5580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0721 17:11:18.068518    5580 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0721 17:11:18.068523    5580 kubeadm.go:157] found existing configuration files:
	
	I0721 17:11:18.068546    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf
	I0721 17:11:18.071192    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0721 17:11:18.071225    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0721 17:11:18.073650    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf
	I0721 17:11:18.076650    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0721 17:11:18.076669    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0721 17:11:18.079459    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf
	I0721 17:11:18.081792    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0721 17:11:18.081812    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0721 17:11:18.084779    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf
	I0721 17:11:18.087745    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0721 17:11:18.087766    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0721 17:11:18.090134    5580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0721 17:11:18.093231    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:11:18.115504    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:11:18.432529    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:11:18.562545    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:11:18.585724    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:11:18.617486    5580 api_server.go:52] waiting for apiserver process to appear ...
	I0721 17:11:18.617568    5580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:11:19.119657    5580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:11:19.619574    5580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:11:19.626734    5580 api_server.go:72] duration metric: took 1.009276167s to wait for apiserver process to appear ...
	I0721 17:11:19.626746    5580 api_server.go:88] waiting for apiserver healthz status ...
	I0721 17:11:19.626760    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:25.016967    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:24.627281    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:24.627302    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:30.019193    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:30.019337    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:11:30.038539    5424 logs.go:276] 2 containers: [8e120b95a57b de94b8fa24b7]
	I0721 17:11:30.038637    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:11:30.054548    5424 logs.go:276] 2 containers: [9d5e7f35fab1 eca19629fad3]
	I0721 17:11:30.054614    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:11:30.066712    5424 logs.go:276] 1 containers: [d913a0607db5]
	I0721 17:11:30.066785    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:11:30.077295    5424 logs.go:276] 2 containers: [b470b81364c6 04cfba4b0b9b]
	I0721 17:11:30.077362    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:11:30.087369    5424 logs.go:276] 1 containers: [dd8f10bf3e93]
	I0721 17:11:30.087432    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:11:30.098145    5424 logs.go:276] 2 containers: [9c266780ddde e243b7ecf176]
	I0721 17:11:30.098218    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:11:30.108192    5424 logs.go:276] 0 containers: []
	W0721 17:11:30.108201    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:11:30.108252    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:11:30.118500    5424 logs.go:276] 2 containers: [2f810c28a5d8 9d1850e09eaa]
	I0721 17:11:30.118516    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:11:30.118522    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:11:30.156822    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:11:30.156914    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:11:30.157948    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:11:30.157953    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:11:30.195856    5424 logs.go:123] Gathering logs for kube-proxy [dd8f10bf3e93] ...
	I0721 17:11:30.195870    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8f10bf3e93"
	I0721 17:11:30.207795    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:11:30.207806    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:11:30.232647    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:11:30.232663    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:11:30.237045    5424 logs.go:123] Gathering logs for kube-apiserver [de94b8fa24b7] ...
	I0721 17:11:30.237052    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de94b8fa24b7"
	I0721 17:11:30.258550    5424 logs.go:123] Gathering logs for etcd [eca19629fad3] ...
	I0721 17:11:30.258564    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca19629fad3"
	I0721 17:11:30.273185    5424 logs.go:123] Gathering logs for kube-controller-manager [9c266780ddde] ...
	I0721 17:11:30.273195    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c266780ddde"
	I0721 17:11:30.291375    5424 logs.go:123] Gathering logs for etcd [9d5e7f35fab1] ...
	I0721 17:11:30.291390    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d5e7f35fab1"
	I0721 17:11:30.305612    5424 logs.go:123] Gathering logs for kube-scheduler [04cfba4b0b9b] ...
	I0721 17:11:30.305626    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04cfba4b0b9b"
	I0721 17:11:30.321098    5424 logs.go:123] Gathering logs for kube-controller-manager [e243b7ecf176] ...
	I0721 17:11:30.321113    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e243b7ecf176"
	I0721 17:11:30.335938    5424 logs.go:123] Gathering logs for storage-provisioner [9d1850e09eaa] ...
	I0721 17:11:30.335948    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1850e09eaa"
	I0721 17:11:30.353135    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:11:30.353153    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:11:30.366969    5424 logs.go:123] Gathering logs for kube-apiserver [8e120b95a57b] ...
	I0721 17:11:30.366985    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e120b95a57b"
	I0721 17:11:30.381585    5424 logs.go:123] Gathering logs for coredns [d913a0607db5] ...
	I0721 17:11:30.381599    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d913a0607db5"
	I0721 17:11:30.394038    5424 logs.go:123] Gathering logs for kube-scheduler [b470b81364c6] ...
	I0721 17:11:30.394049    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b470b81364c6"
	I0721 17:11:30.406257    5424 logs.go:123] Gathering logs for storage-provisioner [2f810c28a5d8] ...
	I0721 17:11:30.406269    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f810c28a5d8"
	I0721 17:11:30.417336    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:11:30.417346    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:11:30.417375    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:11:30.417380    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:11:30.417384    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:11:30.417389    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:11:30.417401    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:11:29.628588    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:29.628631    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:34.629032    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:34.629075    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:40.421230    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:39.629428    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:39.629447    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:45.423371    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:45.423455    5424 kubeadm.go:597] duration metric: took 4m7.428706208s to restartPrimaryControlPlane
	W0721 17:11:45.423530    5424 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0721 17:11:45.423559    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
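Both processes interleaved in this log (pids 5424 and 5580) are stuck in the same loop: api_server.go polls https://10.0.2.15:8443/healthz, every request times out before headers arrive, and once the wait budget is exhausted (4m7s here) minikube gives up on restarting the control plane and falls back to kubeadm reset followed by a fresh kubeadm init. A curl-based stand-in for that health probe (illustrative; minikube uses a Go HTTP client, not curl, and the 240s budget below is approximate):

    deadline=$((SECONDS + 240))                 # overall wait budget, roughly what the log shows
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "apiserver never became healthy" >&2
            break
        fi
        sleep 2
    done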
	I0721 17:11:46.402807    5424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 17:11:46.408012    5424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0721 17:11:46.410819    5424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0721 17:11:46.413760    5424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0721 17:11:46.413767    5424 kubeadm.go:157] found existing configuration files:
	
	I0721 17:11:46.413793    5424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf
	I0721 17:11:46.416518    5424 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0721 17:11:46.416543    5424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0721 17:11:46.419182    5424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf
	I0721 17:11:46.422180    5424 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0721 17:11:46.422201    5424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0721 17:11:46.425095    5424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf
	I0721 17:11:46.427472    5424 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0721 17:11:46.427493    5424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0721 17:11:46.430498    5424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf
	I0721 17:11:46.433367    5424 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0721 17:11:46.433391    5424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0721 17:11:46.435968    5424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0721 17:11:46.451808    5424 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0721 17:11:46.451881    5424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0721 17:11:46.507364    5424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0721 17:11:46.507430    5424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0721 17:11:46.507488    5424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0721 17:11:46.555361    5424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0721 17:11:46.560552    5424 out.go:204]   - Generating certificates and keys ...
	I0721 17:11:46.560585    5424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0721 17:11:46.560618    5424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0721 17:11:46.560663    5424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0721 17:11:46.560695    5424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0721 17:11:46.560734    5424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0721 17:11:46.560762    5424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0721 17:11:46.560800    5424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0721 17:11:46.560838    5424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0721 17:11:46.560876    5424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0721 17:11:46.560923    5424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0721 17:11:46.560948    5424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0721 17:11:46.560978    5424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0721 17:11:46.661264    5424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0721 17:11:44.629806    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:44.629842    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:46.756377    5424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0721 17:11:46.993763    5424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0721 17:11:47.077298    5424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0721 17:11:47.104831    5424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0721 17:11:47.105312    5424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0721 17:11:47.105439    5424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0721 17:11:47.173727    5424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0721 17:11:47.177027    5424 out.go:204]   - Booting up control plane ...
	I0721 17:11:47.177073    5424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0721 17:11:47.177124    5424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0721 17:11:47.177163    5424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0721 17:11:47.177211    5424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0721 17:11:47.178890    5424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0721 17:11:51.682780    5424 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504583 seconds
	I0721 17:11:51.682874    5424 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0721 17:11:51.686573    5424 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0721 17:11:49.630420    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:49.630467    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:52.204410    5424 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0721 17:11:52.204762    5424 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-647000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0721 17:11:52.709081    5424 kubeadm.go:310] [bootstrap-token] Using token: 2c2jkx.5rjfu4kmd42cfnl9
	I0721 17:11:52.715182    5424 out.go:204]   - Configuring RBAC rules ...
	I0721 17:11:52.715245    5424 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0721 17:11:52.715299    5424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0721 17:11:52.718210    5424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0721 17:11:52.722224    5424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0721 17:11:52.723166    5424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0721 17:11:52.724070    5424 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0721 17:11:52.727395    5424 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0721 17:11:52.894096    5424 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0721 17:11:53.114058    5424 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0721 17:11:53.114524    5424 kubeadm.go:310] 
	I0721 17:11:53.114556    5424 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0721 17:11:53.114589    5424 kubeadm.go:310] 
	I0721 17:11:53.114631    5424 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0721 17:11:53.114635    5424 kubeadm.go:310] 
	I0721 17:11:53.114695    5424 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0721 17:11:53.114798    5424 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0721 17:11:53.114845    5424 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0721 17:11:53.114865    5424 kubeadm.go:310] 
	I0721 17:11:53.114896    5424 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0721 17:11:53.114927    5424 kubeadm.go:310] 
	I0721 17:11:53.115016    5424 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0721 17:11:53.115027    5424 kubeadm.go:310] 
	I0721 17:11:53.115054    5424 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0721 17:11:53.115092    5424 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0721 17:11:53.115172    5424 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0721 17:11:53.115177    5424 kubeadm.go:310] 
	I0721 17:11:53.115261    5424 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0721 17:11:53.115301    5424 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0721 17:11:53.115308    5424 kubeadm.go:310] 
	I0721 17:11:53.115354    5424 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2c2jkx.5rjfu4kmd42cfnl9 \
	I0721 17:11:53.115407    5424 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:75e342b31cd1ca4bd3abd7fd07b163bfb3e06809b400a3ad400761b302299515 \
	I0721 17:11:53.115419    5424 kubeadm.go:310] 	--control-plane 
	I0721 17:11:53.115427    5424 kubeadm.go:310] 
	I0721 17:11:53.115477    5424 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0721 17:11:53.115480    5424 kubeadm.go:310] 
	I0721 17:11:53.115527    5424 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2c2jkx.5rjfu4kmd42cfnl9 \
	I0721 17:11:53.115589    5424 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:75e342b31cd1ca4bd3abd7fd07b163bfb3e06809b400a3ad400761b302299515 
	I0721 17:11:53.115659    5424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0721 17:11:53.115671    5424 cni.go:84] Creating CNI manager for ""
	I0721 17:11:53.115679    5424 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:11:53.119440    5424 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0721 17:11:53.127453    5424 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0721 17:11:53.130494    5424 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
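The 1-k8s.conflist copied here is the bridge CNI configuration minikube selects for the docker runtime on Kubernetes v1.24+. The log does not show the 496-byte payload itself; the sketch below writes a generic bridge conflist of roughly that shape (the JSON contents are an assumption, not the recorded file):

package main

import (
	"log"
	"os"
)

// bridgeConflist is an illustrative bridge CNI config; the real file minikube
// copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in this log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Written locally here; minikube streams the file into the guest over SSH instead.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}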
	I0721 17:11:53.135231    5424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0721 17:11:53.135292    5424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 17:11:53.135292    5424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-647000 minikube.k8s.io/updated_at=2024_07_21T17_11_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=running-upgrade-647000 minikube.k8s.io/primary=true
	I0721 17:11:53.172883    5424 kubeadm.go:1113] duration metric: took 37.626583ms to wait for elevateKubeSystemPrivileges
	I0721 17:11:53.172935    5424 ops.go:34] apiserver oom_adj: -16
	I0721 17:11:53.172942    5424 kubeadm.go:394] duration metric: took 4m15.191702s to StartCluster
	I0721 17:11:53.172952    5424 settings.go:142] acquiring lock: {Name:mk7831d6c033f56ef11530d08a44142aeaa86fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:11:53.173042    5424 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:11:53.173413    5424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/kubeconfig: {Name:mk941eb06ccb0e2f7fcbae3a7de63e740b813743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:11:53.173627    5424 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:11:53.173632    5424 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0721 17:11:53.173665    5424 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-647000"
	I0721 17:11:53.173678    5424 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-647000"
	W0721 17:11:53.173683    5424 addons.go:243] addon storage-provisioner should already be in state true
	I0721 17:11:53.173695    5424 host.go:66] Checking if "running-upgrade-647000" exists ...
	I0721 17:11:53.173715    5424 config.go:182] Loaded profile config "running-upgrade-647000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:11:53.173738    5424 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-647000"
	I0721 17:11:53.173754    5424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-647000"
	I0721 17:11:53.173960    5424 retry.go:31] will retry after 573.413849ms: connect: dial unix /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/running-upgrade-647000/monitor: connect: connection refused
	I0721 17:11:53.174693    5424 kapi.go:59] client config for running-upgrade-647000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/running-upgrade-647000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10591b790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0721 17:11:53.174811    5424 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-647000"
	W0721 17:11:53.174815    5424 addons.go:243] addon default-storageclass should already be in state true
	I0721 17:11:53.174823    5424 host.go:66] Checking if "running-upgrade-647000" exists ...
	I0721 17:11:53.175344    5424 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0721 17:11:53.175348    5424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0721 17:11:53.175354    5424 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	I0721 17:11:53.177408    5424 out.go:177] * Verifying Kubernetes components...
	I0721 17:11:53.185376    5424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:53.263350    5424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 17:11:53.269377    5424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0721 17:11:53.270943    5424 api_server.go:52] waiting for apiserver process to appear ...
	I0721 17:11:53.270972    5424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:11:53.579019    5424 api_server.go:72] duration metric: took 405.388167ms to wait for apiserver process to appear ...
	I0721 17:11:53.579033    5424 api_server.go:88] waiting for apiserver healthz status ...
	I0721 17:11:53.579042    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
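The repeated pairs of "Checking apiserver healthz ..." followed by "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)" that dominate the rest of this log are a polling loop against the apiserver's /healthz endpoint with a short per-request client timeout; that error text is what Go's http.Client produces when the whole request times out. A stripped-down sketch of such a probe (hypothetical helper, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz makes one GET against the apiserver /healthz endpoint with a
// short client timeout; an unresponsive apiserver yields the
// "Client.Timeout exceeded while awaiting headers" error seen in the log.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test cluster uses a self-signed CA, so certificate checks are skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	for i := 0; i < 10; i++ {
		if err := probeHealthz(url); err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			time.Sleep(time.Second)
			continue
		}
		fmt.Println("apiserver is healthy")
		return
	}
}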
	I0721 17:11:53.754341    5424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:11:53.758310    5424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 17:11:53.758322    5424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0721 17:11:53.758332    5424 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	I0721 17:11:53.793766    5424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 17:11:54.631641    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:54.631687    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:58.581069    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:58.581112    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:59.632922    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:59.632992    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:03.581356    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:03.581378    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:04.634677    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:04.634709    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:08.581632    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:08.581709    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:09.636612    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:09.636653    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:13.582152    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:13.582208    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:14.638765    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:14.638822    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:18.582918    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:18.582949    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:19.640991    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:19.641141    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:19.659001    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:12:19.659079    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:19.670103    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:12:19.670167    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:19.680762    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:12:19.680823    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:19.690823    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:12:19.690896    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:19.701439    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:12:19.701506    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:19.712513    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:12:19.712594    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:19.728140    5580 logs.go:276] 0 containers: []
	W0721 17:12:19.728158    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:19.728214    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:19.738831    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:12:19.738852    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:12:19.738857    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:12:19.764726    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:12:19.764736    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:12:19.778842    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:12:19.778853    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:12:19.794734    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:12:19.794745    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:12:19.809564    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:12:19.809576    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:12:19.820796    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:19.820808    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:19.925030    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:12:19.925042    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:12:19.938704    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:12:19.938715    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:12:19.952669    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:19.952679    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:19.977260    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:12:19.977269    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:12:19.988407    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:12:19.988416    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:12:20.000229    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:12:20.000240    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:12:20.012293    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:12:20.012306    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:12:20.029463    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:12:20.029473    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:12:20.040785    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:12:20.040795    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:20.052937    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:20.052949    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:12:20.090691    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:20.090700    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
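Each of the log-gathering passes above follows the same recipe: list container IDs for a component with a filtered docker ps, then tail each container's logs (plus journalctl for the kubelet and Docker units, dmesg, and kubectl describe nodes). A compact Go sketch of the docker part of that recipe (hypothetical helper names, not minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists containers for one Kubernetes component, mirroring the
// repeated `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` calls.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs fetches the last 400 lines for a container, like
// `docker logs --tail 400 <id>` in the log above.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("listing", c, "failed:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Print(logs)
		}
	}
}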
	W0721 17:12:23.580440    5424 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0721 17:12:23.583292    5424 out.go:177] * Enabled addons: storage-provisioner
	I0721 17:12:23.583679    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:23.583692    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:23.596275    5424 addons.go:510] duration metric: took 30.423484666s for enable addons: enabled=[storage-provisioner]
	I0721 17:12:22.594823    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:28.584617    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:28.584642    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:27.596988    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:27.597239    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:27.618843    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:12:27.618950    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:27.633089    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:12:27.633164    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:27.645233    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:12:27.645304    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:27.655905    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:12:27.655979    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:27.666105    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:12:27.666173    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:27.676669    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:12:27.676740    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:27.686925    5580 logs.go:276] 0 containers: []
	W0721 17:12:27.686936    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:27.686996    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:27.697477    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:12:27.697498    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:12:27.697503    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:12:27.722492    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:12:27.722503    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:12:27.737101    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:12:27.737112    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:12:27.753983    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:12:27.753994    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:27.765264    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:27.765279    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:27.804326    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:12:27.804338    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:12:27.819133    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:12:27.819143    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:12:27.831696    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:12:27.831709    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:12:27.845368    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:27.845380    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:12:27.849604    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:12:27.849613    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:12:27.863312    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:12:27.863322    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:12:27.875550    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:27.875561    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:27.901526    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:27.901535    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:12:27.939431    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:12:27.939438    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:12:27.953144    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:12:27.953152    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:12:27.967584    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:12:27.967595    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:12:27.979209    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:12:27.979222    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:12:30.491367    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:33.585837    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:33.585862    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:35.493717    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:35.494025    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:35.526464    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:12:35.526600    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:35.546676    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:12:35.546771    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:35.561359    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:12:35.561449    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:35.574199    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:12:35.574273    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:35.585120    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:12:35.585198    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:35.596638    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:12:35.596709    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:35.607679    5580 logs.go:276] 0 containers: []
	W0721 17:12:35.607691    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:35.607752    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:35.619123    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:12:35.619141    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:12:35.619156    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:12:35.631064    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:35.631074    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:12:35.670351    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:35.670359    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:12:35.674774    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:12:35.674783    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:12:35.699894    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:12:35.699906    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:12:35.712374    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:12:35.712385    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:12:35.723413    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:12:35.723423    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:35.735391    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:35.735401    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:35.771503    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:12:35.771516    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:12:35.785216    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:12:35.785226    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:12:35.799334    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:12:35.799346    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:12:35.814111    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:12:35.814121    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:12:35.831646    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:12:35.831655    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:12:35.846802    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:35.846811    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:35.872750    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:12:35.872759    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:12:35.886136    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:12:35.886147    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:12:35.898126    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:12:35.898137    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:12:38.587381    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:38.587408    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:38.412812    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:43.589345    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:43.589368    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:43.414966    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:43.415117    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:43.426243    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:12:43.426319    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:43.437000    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:12:43.437068    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:43.448819    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:12:43.448889    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:43.459362    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:12:43.459432    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:43.470255    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:12:43.470320    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:43.483797    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:12:43.483869    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:43.493678    5580 logs.go:276] 0 containers: []
	W0721 17:12:43.493687    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:43.493746    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:43.504270    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:12:43.504288    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:43.504293    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:12:43.540899    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:12:43.540908    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:12:43.555600    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:12:43.555611    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:12:43.570598    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:43.570610    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:43.605514    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:12:43.605525    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:12:43.617221    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:43.617232    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:43.642086    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:12:43.642094    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:43.653847    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:43.653857    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:12:43.658315    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:12:43.658321    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:12:43.674267    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:12:43.674276    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:12:43.689426    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:12:43.689438    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:12:43.706299    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:12:43.706312    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:12:43.727473    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:12:43.727484    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:12:43.751893    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:12:43.751903    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:12:43.765059    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:12:43.765069    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:12:43.779726    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:12:43.779738    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:12:43.791238    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:12:43.791250    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:12:46.310426    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:48.589712    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:48.589735    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:51.312746    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:51.312959    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:51.327782    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:12:51.327859    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:51.339747    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:12:51.339811    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:51.350625    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:12:51.350690    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:51.366498    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:12:51.366573    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:51.376876    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:12:51.376943    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:51.388492    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:12:51.388562    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:51.398781    5580 logs.go:276] 0 containers: []
	W0721 17:12:51.398793    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:51.398852    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:51.409621    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:12:51.409639    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:12:51.409644    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:51.422131    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:51.422143    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:51.468279    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:12:51.468294    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:12:51.482794    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:12:51.482805    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:12:51.494493    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:12:51.494504    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:12:51.506573    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:12:51.506608    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:12:51.521261    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:51.521274    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:12:51.559806    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:12:51.559818    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:12:51.574450    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:12:51.574464    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:12:51.599801    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:12:51.599811    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:12:51.614420    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:51.614431    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:12:51.618948    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:12:51.618956    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:12:51.629699    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:12:51.629711    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:12:51.641007    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:12:51.641017    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:12:51.654233    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:12:51.654245    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:12:51.666818    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:12:51.666830    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:12:51.683768    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:51.683779    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:53.591748    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:53.591840    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:53.603012    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:12:53.603085    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:53.613729    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:12:53.613800    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:53.624136    5424 logs.go:276] 2 containers: [34af2ac54634 7ccf2a2019bd]
	I0721 17:12:53.624201    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:53.634456    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:12:53.634517    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:53.645152    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:12:53.645224    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:53.660816    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:12:53.660877    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:53.671747    5424 logs.go:276] 0 containers: []
	W0721 17:12:53.671759    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:53.671820    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:53.683329    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:12:53.683345    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:12:53.683351    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:12:53.697833    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:12:53.697844    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:12:53.711628    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:12:53.711642    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:12:53.723297    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:12:53.723309    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:12:53.740633    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:12:53.740643    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:12:53.752025    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:53.752037    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:12:53.770979    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:12:53.771074    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:12:53.792460    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:53.792467    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:53.828152    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:12:53.828163    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:12:53.841579    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:12:53.841590    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:12:53.857088    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:12:53.857098    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:12:53.869065    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:53.869077    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:53.894198    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:12:53.894206    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:53.905457    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:53.905471    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:12:53.909952    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:12:53.909962    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:12:53.909986    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:12:53.909989    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:12:53.909992    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:12:53.910004    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:12:53.910006    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:12:54.209346    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:59.211489    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:59.211687    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:59.230985    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:12:59.231080    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:59.245272    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:12:59.245344    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:59.261194    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:12:59.261261    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:59.271751    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:12:59.271822    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:59.282338    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:12:59.282407    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:59.295588    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:12:59.295648    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:59.305827    5580 logs.go:276] 0 containers: []
	W0721 17:12:59.305840    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:59.305888    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:59.316527    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:12:59.316544    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:59.316549    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:12:59.320884    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:12:59.320893    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:12:59.332814    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:12:59.332824    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:12:59.344192    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:59.344206    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:59.369384    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:12:59.369394    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:12:59.383509    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:12:59.383519    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:12:59.408245    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:12:59.408258    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:12:59.419874    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:12:59.419886    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:59.434577    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:59.434588    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:12:59.471118    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:59.471126    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:59.506092    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:12:59.506104    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:12:59.520160    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:12:59.520172    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:12:59.532391    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:12:59.532401    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:12:59.549980    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:12:59.549992    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:12:59.572728    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:12:59.572741    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:12:59.584119    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:12:59.584130    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:12:59.598842    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:12:59.598852    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:03.913956    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:02.114715    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:08.916610    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:08.916766    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:08.930452    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:13:08.930531    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:08.941627    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:13:08.941698    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:08.951833    5424 logs.go:276] 2 containers: [34af2ac54634 7ccf2a2019bd]
	I0721 17:13:08.951902    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:08.962681    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:13:08.962751    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:08.977111    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:13:08.977185    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:08.987609    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:13:08.987679    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:08.997612    5424 logs.go:276] 0 containers: []
	W0721 17:13:08.997622    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:08.997681    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:09.014462    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:13:09.014477    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:13:09.014483    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:13:09.027635    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:13:09.027648    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:13:09.044809    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:13:09.044823    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:09.057426    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:09.057439    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:09.062222    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:09.062229    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:09.100058    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:13:09.100070    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:13:09.114239    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:13:09.114251    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:13:09.125884    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:13:09.125894    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:13:09.137623    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:09.137634    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:09.162385    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:09.162395    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:13:09.182269    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:09.182367    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:09.202862    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:13:09.202867    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:13:09.217734    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:13:09.217745    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:13:09.229480    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:13:09.229493    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:13:09.244552    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:09.244564    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:13:09.244589    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:13:09.244593    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:09.244596    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:09.244600    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:09.244603    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:13:07.117187    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:07.117428    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:07.141579    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:07.141695    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:07.158149    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:07.158230    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:07.171218    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:07.171290    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:07.181888    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:07.181958    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:07.195229    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:07.195298    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:07.205831    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:07.205903    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:07.216355    5580 logs.go:276] 0 containers: []
	W0721 17:13:07.216367    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:07.216430    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:07.227077    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:07.227094    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:07.227098    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:07.241094    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:07.241103    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:07.252652    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:07.252663    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:07.266262    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:07.266273    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:07.291476    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:07.291487    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:07.303719    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:07.303729    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:07.307809    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:07.307818    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:07.331872    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:07.331883    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:07.350310    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:07.350320    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:07.361620    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:07.361630    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:07.396789    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:07.396801    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:07.408053    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:07.408064    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:07.421614    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:07.421625    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:07.435743    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:07.435752    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:07.464642    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:07.464653    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:07.483489    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:07.483499    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:07.502156    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:07.502172    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:10.043541    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:15.045790    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:15.045960    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:15.059193    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:15.059294    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:15.074138    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:15.074236    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:15.084959    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:15.085031    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:15.095085    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:15.095151    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:15.105695    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:15.105772    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:15.117199    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:15.117287    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:15.129012    5580 logs.go:276] 0 containers: []
	W0721 17:13:15.129026    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:15.129098    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:15.140886    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:15.140905    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:15.140913    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:15.177067    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:15.177079    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:15.188897    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:15.188909    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:15.202662    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:15.202673    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:15.213810    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:15.213822    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:15.228216    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:15.228227    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:15.239747    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:15.239759    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:15.253797    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:15.253807    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:15.271882    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:15.271893    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:15.283740    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:15.283750    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:15.323509    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:15.323521    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:15.327844    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:15.327853    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:15.338874    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:15.338887    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:15.353650    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:15.353661    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:15.378498    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:15.378505    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:15.390114    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:15.390125    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:15.414173    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:15.414183    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:19.248462    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:17.930071    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:24.250731    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:24.251245    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:24.290864    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:13:24.290999    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:24.311354    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:13:24.311453    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:24.326411    5424 logs.go:276] 2 containers: [34af2ac54634 7ccf2a2019bd]
	I0721 17:13:24.326491    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:24.338569    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:13:24.338641    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:24.349936    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:13:24.350030    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:24.360631    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:13:24.360696    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:24.375496    5424 logs.go:276] 0 containers: []
	W0721 17:13:24.375512    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:24.375571    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:24.386198    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:13:24.386213    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:13:24.386219    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:13:24.400228    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:13:24.400244    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:13:24.413114    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:13:24.413125    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:13:24.428631    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:13:24.428641    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:13:24.441364    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:13:24.441376    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:13:24.461992    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:13:24.462003    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:13:24.473969    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:13:24.473980    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:24.485677    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:13:24.485691    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:13:24.504897    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:24.504908    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:24.509436    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:24.509442    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:24.543600    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:13:24.543611    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:13:24.556239    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:24.556250    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:24.581334    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:24.581356    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:13:24.601937    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:24.602031    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:24.622955    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:24.622964    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:13:24.622989    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:13:24.622992    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:24.622995    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:24.622998    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:24.623001    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:13:22.932626    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:22.932923    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:22.959563    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:22.959691    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:22.978897    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:22.978975    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:22.992279    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:22.992356    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:23.003067    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:23.003137    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:23.013828    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:23.013895    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:23.024824    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:23.024896    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:23.035819    5580 logs.go:276] 0 containers: []
	W0721 17:13:23.035831    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:23.035892    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:23.046155    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:23.046173    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:23.046179    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:23.050683    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:23.050690    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:23.087700    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:23.087711    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:23.113325    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:23.113335    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:23.127197    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:23.127211    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:23.139646    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:23.139657    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:23.165720    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:23.165731    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:23.184454    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:23.184465    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:23.197703    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:23.197717    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:23.209181    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:23.209197    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:23.223323    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:23.223332    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:23.240001    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:23.240013    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:23.252944    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:23.252957    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:23.290828    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:23.290838    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:23.307121    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:23.307133    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:23.323541    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:23.323551    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:23.341777    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:23.341788    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:25.857956    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:30.860363    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:30.860548    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:30.873706    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:30.873783    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:30.884364    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:30.884440    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:30.895087    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:30.895156    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:30.905524    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:30.905603    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:30.915816    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:30.915888    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:30.926390    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:30.926451    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:30.935863    5580 logs.go:276] 0 containers: []
	W0721 17:13:30.935874    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:30.935931    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:30.946236    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:30.946252    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:30.946258    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:30.962642    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:30.962653    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:30.987625    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:30.987636    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:31.026267    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:31.026275    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:31.030794    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:31.030803    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:31.042321    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:31.042331    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:31.053478    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:31.053490    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:31.067589    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:31.067603    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:31.082336    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:31.082347    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:31.110886    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:31.110897    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:31.125134    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:31.125145    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:31.138101    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:31.138112    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:31.155276    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:31.155287    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:31.169189    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:31.169199    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:31.181608    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:31.181619    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:31.193710    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:31.193723    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:31.229189    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:31.229200    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:34.626905    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:33.745304    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:39.629104    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:39.629274    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:39.648842    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:13:39.648925    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:39.663618    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:13:39.663691    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:39.675158    5424 logs.go:276] 2 containers: [34af2ac54634 7ccf2a2019bd]
	I0721 17:13:39.675226    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:39.689951    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:13:39.690018    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:39.700513    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:13:39.700582    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:39.711282    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:13:39.711347    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:39.721469    5424 logs.go:276] 0 containers: []
	W0721 17:13:39.721484    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:39.721544    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:39.731755    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:13:39.731769    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:13:39.731773    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:13:39.746184    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:13:39.746193    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:13:39.764493    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:13:39.764504    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:13:39.780811    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:13:39.780822    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:13:39.792474    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:39.792487    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:39.815766    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:13:39.815774    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:39.827423    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:39.827433    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:39.832172    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:39.832182    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:39.873369    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:13:39.873380    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:13:39.885231    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:13:39.885241    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:13:39.896826    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:13:39.896839    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:13:39.908697    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:13:39.908707    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:13:39.930752    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:39.930765    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:13:39.949326    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:39.949418    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:39.970287    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:39.970294    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:13:39.970318    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:13:39.970322    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:39.970326    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:39.970329    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:39.970333    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:13:38.746183    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:38.746340    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:38.759047    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:38.759118    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:38.770214    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:38.770284    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:38.780718    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:38.780796    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:38.791234    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:38.791301    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:38.801046    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:38.801115    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:38.811354    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:38.811422    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:38.821542    5580 logs.go:276] 0 containers: []
	W0721 17:13:38.821555    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:38.821618    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:38.831867    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:38.831887    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:38.831892    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:38.836411    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:38.836420    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:38.871447    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:38.871460    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:38.882718    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:38.882729    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:38.900764    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:38.900775    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:38.919394    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:38.919404    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:38.931633    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:38.931645    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:38.943246    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:38.943257    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:38.968371    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:38.968383    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:38.982915    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:38.982928    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:38.995167    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:38.995179    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:39.007342    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:39.007353    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:39.047311    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:39.047320    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:39.061765    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:39.061776    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:39.080647    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:39.080658    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:39.101494    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:39.101504    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:39.125695    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:39.125703    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:41.638991    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:46.641212    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:46.641432    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:46.664253    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:46.664376    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:46.679733    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:46.679816    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:46.692186    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:46.692259    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:46.703549    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:46.703627    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:46.720386    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:46.720451    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:46.730988    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:46.731063    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:46.741782    5580 logs.go:276] 0 containers: []
	W0721 17:13:46.741793    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:46.741851    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:46.752146    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:46.752166    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:46.752171    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:46.770301    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:46.770313    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:46.783350    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:46.783364    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:46.797956    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:46.797969    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:46.809151    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:46.809165    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:46.830317    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:46.830329    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:46.844446    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:46.844456    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:46.869242    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:46.869252    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:46.883971    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:46.883979    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:46.907313    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:46.907328    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:46.918640    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:46.918652    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:46.943083    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:46.943091    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:46.947583    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:46.947588    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:46.961411    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:46.961422    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:46.973465    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:46.973481    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:49.974271    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:46.985785    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:46.985794    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:47.022139    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:47.022148    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:49.558045    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:54.976770    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:54.976861    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:54.990856    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:13:54.990928    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:55.001874    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:13:55.001946    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:55.012413    5424 logs.go:276] 2 containers: [34af2ac54634 7ccf2a2019bd]
	I0721 17:13:55.012484    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:55.028578    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:13:55.028644    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:55.038994    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:13:55.039064    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:55.049199    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:13:55.049271    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:55.059909    5424 logs.go:276] 0 containers: []
	W0721 17:13:55.059919    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:55.059973    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:55.070664    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:13:55.070680    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:55.070685    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:55.096170    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:55.096178    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:55.131052    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:13:55.131064    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:13:55.145586    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:13:55.145597    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:13:55.159770    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:13:55.159781    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:13:55.171284    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:13:55.171295    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:13:55.186990    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:13:55.186999    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:13:55.198946    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:13:55.198957    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:13:55.217535    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:13:55.217544    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:55.229888    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:55.229898    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:13:55.250237    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:55.250329    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:55.270902    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:55.270907    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:55.275370    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:13:55.275376    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:13:55.288491    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:13:55.288502    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:13:55.300454    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:55.300466    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:13:55.300490    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:13:55.300496    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:13:55.300501    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:13:55.300558    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:13:55.300562    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:13:54.560580    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:54.560844    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:54.588275    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:54.588377    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:54.603662    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:54.603753    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:54.615891    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:54.615963    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:54.629787    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:54.629856    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:54.640214    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:54.640283    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:54.650882    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:54.650947    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:54.660634    5580 logs.go:276] 0 containers: []
	W0721 17:13:54.660646    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:54.660705    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:54.670617    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:54.670635    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:54.670640    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:54.681903    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:54.681914    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:54.699558    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:54.699568    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:54.704189    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:54.704199    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:54.740300    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:54.740311    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:54.754878    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:54.754889    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:54.778828    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:54.778839    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:54.796084    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:54.796096    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:54.807675    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:54.807687    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:54.819771    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:54.819787    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:54.844360    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:54.844373    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:54.855726    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:54.855735    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:54.880351    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:54.880359    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:54.894085    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:54.894095    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:54.930469    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:54.930478    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:54.944086    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:54.944096    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:54.956173    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:54.956185    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:57.470400    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:05.302615    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:02.472971    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:02.473340    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:02.508679    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:02.508809    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:02.526434    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:02.526510    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:02.539938    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:02.540015    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:02.551831    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:02.551915    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:02.563231    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:02.563305    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:02.576611    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:02.576680    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:02.587199    5580 logs.go:276] 0 containers: []
	W0721 17:14:02.587213    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:02.587277    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:02.597766    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:02.597785    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:02.597791    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:02.602166    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:02.602173    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:02.626946    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:02.626957    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:02.638837    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:02.638849    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:02.650503    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:02.650513    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:02.666012    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:02.666022    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:02.677983    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:02.677995    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:02.716046    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:02.716054    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:02.730919    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:02.730929    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:02.745242    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:02.745252    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:02.756733    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:02.756745    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:02.771264    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:02.771279    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:02.785748    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:02.785758    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:02.797630    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:02.797640    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:02.814588    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:02.814600    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:02.828027    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:02.828037    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:02.851890    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:02.851900    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:05.391173    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:10.304882    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:10.305103    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:10.323330    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:14:10.323421    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:10.338345    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:14:10.338425    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:10.350525    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:14:10.350602    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:10.361563    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:14:10.361641    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:10.372017    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:14:10.372090    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:10.382949    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:14:10.383025    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:10.392957    5424 logs.go:276] 0 containers: []
	W0721 17:14:10.392970    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:10.393038    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:10.404890    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:14:10.404905    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:10.404910    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:14:10.426167    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:10.426266    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:10.447932    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:14:10.447957    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:14:10.460649    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:14:10.460659    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:14:10.473652    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:14:10.473662    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:14:10.486099    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:14:10.486111    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:10.498816    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:14:10.498829    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:14:10.511216    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:14:10.511228    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:14:10.527022    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:10.527032    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:10.532480    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:10.532492    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:10.574416    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:14:10.574434    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:14:10.589334    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:10.589347    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:10.615289    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:14:10.615302    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:14:10.630443    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:14:10.630453    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:14:10.642920    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:14:10.642933    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:14:10.661791    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:14:10.661810    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:14:10.675124    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:10.675134    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:14:10.675161    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:14:10.675166    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:10.675170    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:10.675174    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:10.675176    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:14:10.392991    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:10.393086    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:10.404355    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:10.404433    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:10.416078    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:10.416152    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:10.427303    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:10.427366    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:10.438136    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:10.438200    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:10.449297    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:10.449366    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:10.460420    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:10.460497    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:10.471499    5580 logs.go:276] 0 containers: []
	W0721 17:14:10.471510    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:10.471569    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:10.483423    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:10.483443    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:10.483448    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:10.523669    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:10.523683    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:10.539954    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:10.539970    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:10.553220    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:10.553233    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:10.566296    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:10.566308    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:10.608470    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:10.608481    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:10.635083    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:10.635096    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:10.650970    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:10.650985    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:10.663649    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:10.663662    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:10.676904    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:10.676912    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:10.688556    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:10.688566    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:10.700092    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:10.700103    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:10.704463    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:10.704469    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:10.718915    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:10.718928    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:10.736503    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:10.736517    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:10.760492    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:10.760504    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:10.778561    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:10.778574    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:13.292602    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:20.678230    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:18.293270    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:18.293452    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:18.308588    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:18.308668    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:18.320142    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:18.320216    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:18.330504    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:18.330573    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:18.341731    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:18.341816    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:18.353114    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:18.353181    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:18.364269    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:18.364342    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:18.374765    5580 logs.go:276] 0 containers: []
	W0721 17:14:18.374780    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:18.374835    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:18.388737    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:18.388755    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:18.388760    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:18.424045    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:18.424056    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:18.438088    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:18.438101    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:18.463211    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:18.463221    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:18.477031    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:18.477043    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:18.488120    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:18.488131    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:18.499354    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:18.499368    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:18.517253    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:18.517264    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:18.521445    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:18.521452    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:18.544029    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:18.544039    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:18.556509    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:18.556520    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:18.573427    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:18.573438    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:18.588419    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:18.588429    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:18.626193    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:18.626206    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:18.638323    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:18.638337    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:18.649731    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:18.649742    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:18.664947    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:18.664959    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:21.185729    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:25.680409    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:25.680661    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:25.703585    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:14:25.703699    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:25.718711    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:14:25.718785    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:25.731662    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:14:25.731733    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:25.742793    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:14:25.742858    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:25.753261    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:14:25.753327    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:25.763952    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:14:25.764023    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:25.774684    5424 logs.go:276] 0 containers: []
	W0721 17:14:25.774695    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:25.774754    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:25.785117    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:14:25.785132    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:14:25.785137    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:14:25.796399    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:14:25.796413    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:14:25.807928    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:14:25.807940    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:14:25.825909    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:25.825918    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:14:25.845663    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:25.845755    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:25.866054    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:25.866061    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:25.871386    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:14:25.871396    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:14:25.885947    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:14:25.885960    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:14:25.902577    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:25.902587    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:25.927174    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:25.927183    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:25.961622    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:14:25.961637    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:14:25.976393    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:14:25.976406    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:14:25.988343    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:14:25.988354    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:14:26.008850    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:14:26.008861    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:14:26.020978    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:14:26.020991    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:14:26.032677    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:14:26.032687    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:26.043783    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:26.043795    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:14:26.043819    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:14:26.043825    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:26.043828    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:26.043833    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:26.043836    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:14:26.187885    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:26.188023    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:26.203122    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:26.203187    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:26.213825    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:26.213901    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:26.223976    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:26.224042    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:26.234422    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:26.234500    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:26.244780    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:26.244845    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:26.255079    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:26.255158    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:26.265720    5580 logs.go:276] 0 containers: []
	W0721 17:14:26.265731    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:26.265791    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:26.276649    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:26.276667    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:26.276672    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:26.301430    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:26.301442    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:26.313326    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:26.313337    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:26.332804    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:26.332813    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:26.348696    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:26.348708    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:26.362416    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:26.362427    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:26.376922    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:26.376930    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:26.388544    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:26.388559    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:26.426833    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:26.426842    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:26.440028    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:26.440042    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:26.451660    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:26.451671    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:26.463487    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:26.463497    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:26.488032    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:26.488039    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:26.492384    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:26.492393    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:26.506536    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:26.506546    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:26.517275    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:26.517285    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:26.529455    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:26.529465    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:29.064466    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:36.047346    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:34.066615    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:34.066980    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:34.101545    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:34.101677    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:34.119901    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:34.119989    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:34.133050    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:34.133124    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:34.146770    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:34.146838    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:34.157958    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:34.158035    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:34.168657    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:34.168722    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:34.179169    5580 logs.go:276] 0 containers: []
	W0721 17:14:34.179184    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:34.179244    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:34.190454    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:34.190474    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:34.190479    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:34.227279    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:34.227289    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:34.231537    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:34.231545    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:34.245765    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:34.245775    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:34.265488    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:34.265500    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:34.290075    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:34.290086    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:34.304645    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:34.304656    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:34.321651    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:34.321663    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:34.357435    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:34.357446    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:34.371737    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:34.371750    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:34.382760    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:34.382772    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:34.394262    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:34.394274    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:34.408919    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:34.408929    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:34.432682    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:34.432690    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:34.448041    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:34.448052    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:34.460451    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:34.460462    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:34.471966    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:34.471978    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:41.050055    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:41.050204    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:41.062423    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:14:41.062500    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:41.073835    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:14:41.073910    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:41.089627    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:14:41.089705    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:41.100234    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:14:41.100302    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:41.113229    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:14:41.113302    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:41.125579    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:14:41.125651    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:41.136443    5424 logs.go:276] 0 containers: []
	W0721 17:14:41.136455    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:41.136514    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:41.146529    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:14:41.146550    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:14:41.146560    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:14:41.161734    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:14:41.161746    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:14:41.173453    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:41.173465    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:41.196544    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:14:41.196551    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:41.208398    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:14:41.208409    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:14:41.220240    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:14:41.220249    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:14:41.245272    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:14:41.245283    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:14:41.262044    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:14:41.262057    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:14:41.286657    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:14:41.286668    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:14:41.303746    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:14:41.303758    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:14:41.324181    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:41.324192    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:41.328640    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:14:41.328649    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:14:41.340420    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:14:41.340431    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:14:41.351732    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:41.351742    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:14:41.370330    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:41.370424    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:41.391267    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:41.391272    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:41.427084    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:41.427096    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:14:41.427123    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:14:41.427127    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:41.427132    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:41.427137    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:41.427140    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:14:36.985185    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:41.986134    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:41.986455    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:42.022299    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:42.022442    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:42.047163    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:42.047251    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:42.060932    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:42.061007    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:42.072777    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:42.072853    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:42.083626    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:42.083697    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:42.095249    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:42.095323    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:42.106046    5580 logs.go:276] 0 containers: []
	W0721 17:14:42.106058    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:42.106119    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:42.119661    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:42.119680    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:42.119686    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:42.165743    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:42.165765    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:42.171184    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:42.171200    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:42.183008    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:42.183021    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:42.194449    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:42.194460    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:42.219596    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:42.219607    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:42.234117    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:42.234127    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:42.246242    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:42.246251    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:42.257438    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:42.257449    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:42.279460    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:42.279469    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:42.291479    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:42.291490    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:42.330882    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:42.330893    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:42.346502    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:42.346512    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:42.360753    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:42.360762    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:42.375177    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:42.375186    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:42.387220    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:42.387232    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:42.404854    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:42.404865    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:44.920378    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:51.431058    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:49.922742    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:49.923197    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:49.961341    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:49.961475    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:49.982610    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:49.982715    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:49.998127    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:49.998203    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:50.011180    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:50.011261    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:50.022549    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:50.022612    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:50.033609    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:50.033680    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:50.043646    5580 logs.go:276] 0 containers: []
	W0721 17:14:50.043659    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:50.043719    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:50.054181    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:50.054198    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:50.054202    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:50.073364    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:50.073375    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:50.088833    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:50.088843    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:50.100666    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:50.100676    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:50.123703    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:50.123713    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:50.128046    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:50.128054    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:50.142209    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:50.142219    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:50.166765    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:50.166775    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:50.180009    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:50.180022    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:50.192741    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:50.192752    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:50.204278    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:50.204288    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:50.221651    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:50.221661    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:50.257989    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:50.258000    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:50.293065    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:50.293077    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:50.307560    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:50.307571    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:50.318725    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:50.318736    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:50.332932    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:50.332943    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:56.433241    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:56.433351    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:56.444027    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:14:56.444098    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:56.454426    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:14:56.454500    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:56.464855    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:14:56.464930    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:56.475308    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:14:56.475374    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:56.486101    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:14:56.486172    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:56.496773    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:14:56.496835    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:56.507439    5424 logs.go:276] 0 containers: []
	W0721 17:14:56.507452    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:56.507508    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:56.519116    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:14:56.519134    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:56.519140    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:56.594177    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:14:56.594188    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:14:56.610028    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:56.610038    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:56.615325    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:14:56.615334    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:14:56.629484    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:14:56.629494    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:14:56.652181    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:14:56.652193    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:14:56.665867    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:14:56.665878    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:14:56.677940    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:14:56.677951    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:14:56.699760    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:56.699770    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:52.846626    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:56.724345    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:14:56.724360    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:56.735725    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:56.735736    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:14:56.755714    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:56.755814    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:56.776689    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:14:56.776695    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:14:56.790851    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:14:56.790866    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:14:56.802596    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:14:56.802608    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:14:56.815134    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:14:56.815144    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:14:56.826915    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:56.826925    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:14:56.826952    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:14:56.826956    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:14:56.826960    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:14:56.826963    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:14:56.826966    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:14:57.848940    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:57.849279    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:57.881750    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:57.881879    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:57.900622    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:57.900712    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:57.915059    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:57.915126    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:57.926977    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:57.927058    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:57.938189    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:57.938262    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:57.948652    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:57.948723    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:57.959432    5580 logs.go:276] 0 containers: []
	W0721 17:14:57.959443    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:57.959504    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:57.974306    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:57.974323    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:57.974328    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:58.012912    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:58.012922    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:58.052031    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:58.052041    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:58.064372    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:58.064384    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:58.078498    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:58.078509    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:58.092098    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:58.092108    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:58.103502    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:58.103513    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:58.114663    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:58.114675    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:58.119359    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:58.119367    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:58.149429    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:58.149444    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:58.164256    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:58.164268    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:58.176261    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:58.176276    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:58.194566    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:58.194580    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:58.210701    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:58.210718    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:58.224690    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:58.224700    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:58.244279    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:58.244288    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:58.258098    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:58.258109    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:15:00.783855    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:05.786089    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:05.786296    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:15:05.799726    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:15:05.799810    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:15:05.811526    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:15:05.811598    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:15:05.821933    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:15:05.822001    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:15:05.832155    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:15:05.832234    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:15:05.842293    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:15:05.842362    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:15:05.853219    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:15:05.853289    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:15:05.863773    5580 logs.go:276] 0 containers: []
	W0721 17:15:05.863788    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:15:05.863847    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:15:05.874356    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:15:05.874373    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:15:05.874379    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:15:05.899067    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:15:05.899079    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:15:05.913723    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:15:05.913735    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:15:05.927246    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:15:05.927260    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:15:05.938919    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:15:05.938933    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:15:05.953022    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:15:05.953036    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:15:05.968065    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:15:05.968075    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:15:05.980300    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:15:05.980313    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:15:06.018636    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:15:06.018648    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:15:06.023147    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:15:06.023154    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:15:06.036980    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:15:06.036993    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:15:06.048577    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:15:06.048589    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:15:06.062785    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:15:06.062797    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:15:06.088476    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:15:06.088484    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:15:06.099953    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:15:06.099965    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:15:06.133746    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:15:06.133760    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:15:06.148827    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:15:06.148838    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:15:06.830833    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:08.668369    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:11.833056    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:11.833258    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:15:11.851932    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:15:11.852048    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:15:11.866162    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:15:11.866229    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:15:11.878487    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:15:11.878560    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:15:11.889547    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:15:11.889612    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:15:11.900093    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:15:11.900164    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:15:11.912823    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:15:11.912888    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:15:11.923597    5424 logs.go:276] 0 containers: []
	W0721 17:15:11.923608    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:15:11.923663    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:15:11.934307    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:15:11.934326    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:15:11.934331    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:15:11.948289    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:15:11.948301    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:15:11.960524    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:15:11.960539    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:15:11.976283    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:15:11.976295    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:15:12.001145    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:15:12.001157    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:15:12.012913    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:15:12.012926    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:15:12.031017    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:15:12.031027    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:15:12.042681    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:15:12.042692    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:15:12.060570    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:15:12.060580    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:15:12.080911    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:15:12.081004    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:15:12.101898    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:15:12.101904    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:15:12.115485    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:15:12.115498    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:15:12.127031    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:15:12.127045    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:15:12.131828    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:15:12.131836    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:15:12.169404    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:15:12.169414    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:15:12.181340    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:15:12.181353    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:15:12.192884    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:15:12.192894    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:15:12.192919    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:15:12.192925    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:15:12.192929    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:15:12.192933    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:15:12.192935    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:15:13.670437    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:13.670587    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:15:13.684011    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:15:13.684099    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:15:13.694649    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:15:13.694712    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:15:13.710397    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:15:13.710464    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:15:13.724019    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:15:13.724085    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:15:13.734510    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:15:13.734604    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:15:13.745461    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:15:13.745520    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:15:13.755524    5580 logs.go:276] 0 containers: []
	W0721 17:15:13.755536    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:15:13.755596    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:15:13.766218    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:15:13.766239    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:15:13.766244    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:15:13.777856    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:15:13.777867    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:15:13.800857    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:15:13.800865    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:15:13.812200    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:15:13.812214    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:15:13.850312    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:15:13.850321    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:15:13.864401    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:15:13.864415    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:15:13.883175    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:15:13.883187    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:15:13.913283    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:15:13.913295    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:15:13.927388    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:15:13.927398    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:15:13.938685    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:15:13.938695    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:15:13.950007    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:15:13.950019    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:15:13.954546    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:15:13.954554    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:15:13.969139    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:15:13.969150    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:15:13.980648    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:15:13.980662    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:15:13.995293    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:15:13.995302    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:15:14.031547    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:15:14.031559    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:15:14.045610    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:15:14.045620    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:15:16.572407    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:21.572610    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:21.572684    5580 kubeadm.go:597] duration metric: took 4m3.548922666s to restartPrimaryControlPlane
	W0721 17:15:21.572719    5580 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0721 17:15:21.572733    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0721 17:15:22.615001    5580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.042283167s)
	I0721 17:15:22.615358    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 17:15:22.620318    5580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0721 17:15:22.623215    5580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0721 17:15:22.625968    5580 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0721 17:15:22.625975    5580 kubeadm.go:157] found existing configuration files:
	
	I0721 17:15:22.625997    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf
	I0721 17:15:22.628644    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0721 17:15:22.628666    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0721 17:15:22.631554    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf
	I0721 17:15:22.634597    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0721 17:15:22.634619    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0721 17:15:22.638129    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf
	I0721 17:15:22.640986    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0721 17:15:22.641005    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0721 17:15:22.643628    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf
	I0721 17:15:22.646384    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0721 17:15:22.646406    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
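	The grep/rm pairs above are minikube's stale-kubeconfig check before re-running kubeadm init: each expected kubeconfig under /etc/kubernetes is grepped for the cluster's control-plane endpoint, and any file that is missing or does not reference it is removed. A compact sketch of the same loop, using only the endpoint and file names that appear in the log above (a sketch of the pattern, not minikube's actual implementation):

	    endpoint="https://control-plane.minikube.internal:50486"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the file only if it already points at the expected endpoint
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done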
	I0721 17:15:22.649304    5580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0721 17:15:22.667223    5580 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0721 17:15:22.667296    5580 kubeadm.go:310] [preflight] Running pre-flight checks
	I0721 17:15:22.720039    5580 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0721 17:15:22.720098    5580 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0721 17:15:22.720162    5580 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0721 17:15:22.768966    5580 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0721 17:15:22.772191    5580 out.go:204]   - Generating certificates and keys ...
	I0721 17:15:22.772229    5580 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0721 17:15:22.772265    5580 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0721 17:15:22.772319    5580 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0721 17:15:22.772471    5580 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0721 17:15:22.772512    5580 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0721 17:15:22.772540    5580 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0721 17:15:22.772570    5580 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0721 17:15:22.772602    5580 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0721 17:15:22.772666    5580 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0721 17:15:22.772721    5580 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0721 17:15:22.772752    5580 kubeadm.go:310] [certs] Using the existing "sa" key
	I0721 17:15:22.772782    5580 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0721 17:15:22.858685    5580 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0721 17:15:22.921503    5580 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0721 17:15:22.969918    5580 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0721 17:15:23.125124    5580 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0721 17:15:23.153447    5580 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0721 17:15:23.153831    5580 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0721 17:15:23.153877    5580 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0721 17:15:23.239147    5580 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0721 17:15:22.196746    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:23.246303    5580 out.go:204]   - Booting up control plane ...
	I0721 17:15:23.246444    5580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0721 17:15:23.246488    5580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0721 17:15:23.246518    5580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0721 17:15:23.246584    5580 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0721 17:15:23.246707    5580 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0721 17:15:27.241025    5580 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002662 seconds
	I0721 17:15:27.241106    5580 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0721 17:15:27.246144    5580 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0721 17:15:27.756147    5580 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0721 17:15:27.756347    5580 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-930000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0721 17:15:28.261093    5580 kubeadm.go:310] [bootstrap-token] Using token: twdtae.3ljsgcwo9tgeaxu2
	I0721 17:15:28.267287    5580 out.go:204]   - Configuring RBAC rules ...
	I0721 17:15:28.267351    5580 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0721 17:15:28.267407    5580 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0721 17:15:28.269476    5580 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0721 17:15:28.274119    5580 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0721 17:15:28.275180    5580 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0721 17:15:28.276008    5580 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0721 17:15:28.289820    5580 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0721 17:15:28.413582    5580 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0721 17:15:28.665207    5580 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0721 17:15:28.665679    5580 kubeadm.go:310] 
	I0721 17:15:28.665710    5580 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0721 17:15:28.665715    5580 kubeadm.go:310] 
	I0721 17:15:28.665759    5580 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0721 17:15:28.665763    5580 kubeadm.go:310] 
	I0721 17:15:28.665785    5580 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0721 17:15:28.665831    5580 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0721 17:15:28.665867    5580 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0721 17:15:28.665873    5580 kubeadm.go:310] 
	I0721 17:15:28.665901    5580 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0721 17:15:28.665904    5580 kubeadm.go:310] 
	I0721 17:15:28.665937    5580 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0721 17:15:28.665940    5580 kubeadm.go:310] 
	I0721 17:15:28.665973    5580 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0721 17:15:28.666019    5580 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0721 17:15:28.666075    5580 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0721 17:15:28.666082    5580 kubeadm.go:310] 
	I0721 17:15:28.666135    5580 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0721 17:15:28.666181    5580 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0721 17:15:28.666184    5580 kubeadm.go:310] 
	I0721 17:15:28.666232    5580 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token twdtae.3ljsgcwo9tgeaxu2 \
	I0721 17:15:28.666303    5580 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:75e342b31cd1ca4bd3abd7fd07b163bfb3e06809b400a3ad400761b302299515 \
	I0721 17:15:28.666319    5580 kubeadm.go:310] 	--control-plane 
	I0721 17:15:28.666324    5580 kubeadm.go:310] 
	I0721 17:15:28.666385    5580 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0721 17:15:28.666388    5580 kubeadm.go:310] 
	I0721 17:15:28.666430    5580 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token twdtae.3ljsgcwo9tgeaxu2 \
	I0721 17:15:28.666490    5580 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:75e342b31cd1ca4bd3abd7fd07b163bfb3e06809b400a3ad400761b302299515 
	I0721 17:15:28.666677    5580 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0721 17:15:28.666686    5580 cni.go:84] Creating CNI manager for ""
	I0721 17:15:28.666696    5580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:15:28.671061    5580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0721 17:15:28.679020    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0721 17:15:28.682340    5580 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0721 17:15:28.688223    5580 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0721 17:15:28.688266    5580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 17:15:28.688288    5580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-930000 minikube.k8s.io/updated_at=2024_07_21T17_15_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=stopped-upgrade-930000 minikube.k8s.io/primary=true
	I0721 17:15:28.726229    5580 kubeadm.go:1113] duration metric: took 37.99475ms to wait for elevateKubeSystemPrivileges
	I0721 17:15:28.726244    5580 ops.go:34] apiserver oom_adj: -16
	I0721 17:15:28.726248    5580 kubeadm.go:394] duration metric: took 4m10.715684s to StartCluster
	I0721 17:15:28.726258    5580 settings.go:142] acquiring lock: {Name:mk7831d6c033f56ef11530d08a44142aeaa86fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:15:28.726348    5580 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:15:28.726756    5580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/kubeconfig: {Name:mk941eb06ccb0e2f7fcbae3a7de63e740b813743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:15:28.726945    5580 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:15:28.726983    5580 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0721 17:15:28.727032    5580 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:15:28.727041    5580 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-930000"
	I0721 17:15:28.727057    5580 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-930000"
	W0721 17:15:28.727060    5580 addons.go:243] addon storage-provisioner should already be in state true
	I0721 17:15:28.727065    5580 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-930000"
	I0721 17:15:28.727073    5580 host.go:66] Checking if "stopped-upgrade-930000" exists ...
	I0721 17:15:28.727077    5580 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-930000"
	I0721 17:15:28.727488    5580 retry.go:31] will retry after 566.889145ms: connect: dial unix /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/monitor: connect: connection refused
	I0721 17:15:28.728249    5580 kapi.go:59] client config for stopped-upgrade-930000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101a1b790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0721 17:15:28.728372    5580 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-930000"
	W0721 17:15:28.728377    5580 addons.go:243] addon default-storageclass should already be in state true
	I0721 17:15:28.728383    5580 host.go:66] Checking if "stopped-upgrade-930000" exists ...
	I0721 17:15:28.728914    5580 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0721 17:15:28.728919    5580 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0721 17:15:28.728924    5580 sshutil.go:53] new ssh client: &{IP:localhost Port:50452 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/id_rsa Username:docker}
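	(For reference, the storageclass.yaml transferred above is minikube's default-storageclass addon manifest. The sketch below illustrates what such a manifest typically contains; the field values are assumptions for illustration, not a copy of the 271-byte file the test pushed.)
	# Illustrative only: a default StorageClass of the kind the addon installs
	apiVersion: storage.k8s.io/v1
	kind: StorageClass
	metadata:
	  name: standard
	  annotations:
	    storageclass.kubernetes.io/is-default-class: "true"
	provisioner: k8s.io/minikube-hostpath
	reclaimPolicy: Delete
	volumeBindingMode: Immediate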
	I0721 17:15:28.730973    5580 out.go:177] * Verifying Kubernetes components...
	I0721 17:15:28.739006    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:15:28.824660    5580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 17:15:28.829740    5580 api_server.go:52] waiting for apiserver process to appear ...
	I0721 17:15:28.829799    5580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:15:28.831670    5580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0721 17:15:28.835721    5580 api_server.go:72] duration metric: took 108.767417ms to wait for apiserver process to appear ...
	I0721 17:15:28.835732    5580 api_server.go:88] waiting for apiserver healthz status ...
	I0721 17:15:28.835738    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
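	(The healthz probe retried here is a plain HTTPS GET against the apiserver. If it needs to be reproduced by hand, the same check can be run from inside the guest, for example after 'minikube ssh'; this is a sketch using the address and port shown in the log, not a command the test itself executes.)
	# -k skips TLS verification, since the apiserver serves a cluster-internal certificate
	curl -k https://10.0.2.15:8443/healthz
	# a healthy control plane answers "ok"; in this run the request times out,
	# matching the repeated "context deadline exceeded" messages below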
	I0721 17:15:29.301221    5580 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:15:27.198802    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:27.198897    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:15:27.209705    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:15:27.209775    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:15:27.222819    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:15:27.222890    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:15:27.237901    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:15:27.237975    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:15:27.249854    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:15:27.249923    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:15:27.265250    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:15:27.265323    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:15:27.275612    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:15:27.275679    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:15:27.285960    5424 logs.go:276] 0 containers: []
	W0721 17:15:27.285974    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:15:27.286028    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:15:27.296906    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:15:27.296923    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:15:27.296927    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:15:27.335996    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:15:27.336009    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:15:27.349490    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:15:27.349502    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:15:27.361208    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:15:27.361219    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:15:27.385804    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:15:27.385812    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:15:27.397955    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:15:27.397966    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:15:27.412047    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:15:27.412057    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:15:27.424316    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:15:27.424326    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:15:27.436115    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:15:27.436128    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:15:27.448963    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:15:27.448973    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:15:27.461003    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:15:27.461014    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:15:27.476537    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:15:27.476551    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:15:27.496459    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:15:27.496470    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:15:27.516816    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:15:27.516909    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:15:27.537921    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:15:27.537926    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:15:27.542743    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:15:27.542751    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:15:27.560721    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:15:27.560732    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:15:27.560760    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:15:27.560765    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:15:27.560768    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:15:27.560773    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:15:27.560780    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:15:29.305175    5580 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 17:15:29.305183    5580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0721 17:15:29.305201    5580 sshutil.go:53] new ssh client: &{IP:localhost Port:50452 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/id_rsa Username:docker}
	I0721 17:15:29.339724    5580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 17:15:33.837733    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:33.837779    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:37.564643    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:38.838034    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:38.838067    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:42.566165    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:42.566284    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:15:42.579617    5424 logs.go:276] 1 containers: [d57096f56066]
	I0721 17:15:42.579694    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:15:42.592000    5424 logs.go:276] 1 containers: [cd92551d008f]
	I0721 17:15:42.592077    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:15:42.603431    5424 logs.go:276] 4 containers: [345fbcd3daaf 9c90546ffec6 34af2ac54634 7ccf2a2019bd]
	I0721 17:15:42.603504    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:15:42.620107    5424 logs.go:276] 1 containers: [faf47f89606d]
	I0721 17:15:42.620180    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:15:42.637075    5424 logs.go:276] 1 containers: [0d9268095b8d]
	I0721 17:15:42.637146    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:15:42.656251    5424 logs.go:276] 1 containers: [5903667374c9]
	I0721 17:15:42.656331    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:15:42.666713    5424 logs.go:276] 0 containers: []
	W0721 17:15:42.666726    5424 logs.go:278] No container was found matching "kindnet"
	I0721 17:15:42.666782    5424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:15:42.677634    5424 logs.go:276] 1 containers: [f63aa2e54ac3]
	I0721 17:15:42.677651    5424 logs.go:123] Gathering logs for etcd [cd92551d008f] ...
	I0721 17:15:42.677657    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd92551d008f"
	I0721 17:15:42.693392    5424 logs.go:123] Gathering logs for coredns [345fbcd3daaf] ...
	I0721 17:15:42.693405    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345fbcd3daaf"
	I0721 17:15:42.705829    5424 logs.go:123] Gathering logs for coredns [7ccf2a2019bd] ...
	I0721 17:15:42.705843    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ccf2a2019bd"
	I0721 17:15:42.721554    5424 logs.go:123] Gathering logs for kube-scheduler [faf47f89606d] ...
	I0721 17:15:42.721567    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf47f89606d"
	I0721 17:15:42.737149    5424 logs.go:123] Gathering logs for kube-proxy [0d9268095b8d] ...
	I0721 17:15:42.737160    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d9268095b8d"
	I0721 17:15:42.753960    5424 logs.go:123] Gathering logs for container status ...
	I0721 17:15:42.753974    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:15:42.766299    5424 logs.go:123] Gathering logs for kubelet ...
	I0721 17:15:42.766313    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0721 17:15:42.786361    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:15:42.786454    5424 logs.go:138] Found kubelet problem: Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:15:42.807021    5424 logs.go:123] Gathering logs for kube-apiserver [d57096f56066] ...
	I0721 17:15:42.807027    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57096f56066"
	I0721 17:15:42.821564    5424 logs.go:123] Gathering logs for coredns [9c90546ffec6] ...
	I0721 17:15:42.821576    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c90546ffec6"
	I0721 17:15:42.833251    5424 logs.go:123] Gathering logs for kube-controller-manager [5903667374c9] ...
	I0721 17:15:42.833262    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5903667374c9"
	I0721 17:15:42.850580    5424 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:15:42.850590    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:15:42.888673    5424 logs.go:123] Gathering logs for coredns [34af2ac54634] ...
	I0721 17:15:42.888685    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34af2ac54634"
	I0721 17:15:42.901070    5424 logs.go:123] Gathering logs for storage-provisioner [f63aa2e54ac3] ...
	I0721 17:15:42.901080    5424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f63aa2e54ac3"
	I0721 17:15:42.913894    5424 logs.go:123] Gathering logs for dmesg ...
	I0721 17:15:42.913907    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:15:42.918956    5424 logs.go:123] Gathering logs for Docker ...
	I0721 17:15:42.918965    5424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:15:42.943039    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:15:42.943047    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 17:15:42.943072    5424 out.go:239] X Problems detected in kubelet:
	W0721 17:15:42.943077    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: W0722 00:07:55.270453    3429 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	W0721 17:15:42.943081    5424 out.go:239]   Jul 22 00:07:55 running-upgrade-647000 kubelet[3429]: E0722 00:07:55.270476    3429 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-647000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-647000' and this object
	I0721 17:15:42.943086    5424 out.go:304] Setting ErrFile to fd 2...
	I0721 17:15:42.943088    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:15:43.838320    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:43.838378    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:48.838822    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:48.838878    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:52.946940    5424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:53.839451    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:53.839507    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:57.949031    5424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:57.954599    5424 out.go:177] 
	W0721 17:15:57.959476    5424 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0721 17:15:57.959488    5424 out.go:239] * 
	W0721 17:15:57.960183    5424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:15:57.970475    5424 out.go:177] 
	I0721 17:15:58.840299    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:58.840344    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0721 17:15:59.158376    5580 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0721 17:15:59.161808    5580 out.go:177] * Enabled addons: storage-provisioner
	I0721 17:15:59.171620    5580 addons.go:510] duration metric: took 30.445478125s for enable addons: enabled=[storage-provisioner]
	I0721 17:16:03.841294    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:16:03.841333    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:16:08.842620    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:16:08.842663    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-07-22 00:06:55 UTC, ends at Mon 2024-07-22 00:16:14 UTC. --
	Jul 22 00:15:55 running-upgrade-647000 dockerd[2905]: time="2024-07-22T00:15:55.910898867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 00:15:55 running-upgrade-647000 dockerd[2905]: time="2024-07-22T00:15:55.911023320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 00:15:55 running-upgrade-647000 dockerd[2905]: time="2024-07-22T00:15:55.911048569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:15:55 running-upgrade-647000 dockerd[2905]: time="2024-07-22T00:15:55.911209063Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2a9e0bb536892605272157c4f2aadb4c45e0e51ac7cf54d79d6e4e0915c054b2 pid=15623 runtime=io.containerd.runc.v2
	Jul 22 00:15:55 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:15:55Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 22 00:15:56 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:15:56Z" level=error msg="ContainerStats resp: {0x40004fcb40 linux}"
	Jul 22 00:15:56 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:15:56Z" level=error msg="ContainerStats resp: {0x40001dc880 linux}"
	Jul 22 00:15:56 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:15:56Z" level=error msg="ContainerStats resp: {0x40004fdd00 linux}"
	Jul 22 00:15:56 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:15:56Z" level=error msg="ContainerStats resp: {0x40001dd200 linux}"
	Jul 22 00:15:56 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:15:56Z" level=error msg="ContainerStats resp: {0x400085a0c0 linux}"
	Jul 22 00:15:56 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:15:56Z" level=error msg="ContainerStats resp: {0x40007528c0 linux}"
	Jul 22 00:15:56 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:15:56Z" level=error msg="ContainerStats resp: {0x400085ac80 linux}"
	Jul 22 00:16:00 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:16:00Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 22 00:16:05 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:16:05Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 22 00:16:06 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:16:06Z" level=error msg="ContainerStats resp: {0x40006db680 linux}"
	Jul 22 00:16:06 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:16:06Z" level=error msg="ContainerStats resp: {0x40006dbb00 linux}"
	Jul 22 00:16:07 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:16:07Z" level=error msg="ContainerStats resp: {0x4000522a40 linux}"
	Jul 22 00:16:08 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:16:08Z" level=error msg="ContainerStats resp: {0x4000753980 linux}"
	Jul 22 00:16:08 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:16:08Z" level=error msg="ContainerStats resp: {0x4000753dc0 linux}"
	Jul 22 00:16:08 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:16:08Z" level=error msg="ContainerStats resp: {0x40001dc1c0 linux}"
	Jul 22 00:16:08 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:16:08Z" level=error msg="ContainerStats resp: {0x40007dd1c0 linux}"
	Jul 22 00:16:08 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:16:08Z" level=error msg="ContainerStats resp: {0x40007dd600 linux}"
	Jul 22 00:16:08 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:16:08Z" level=error msg="ContainerStats resp: {0x40001dce00 linux}"
	Jul 22 00:16:08 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:16:08Z" level=error msg="ContainerStats resp: {0x40001dd640 linux}"
	Jul 22 00:16:10 running-upgrade-647000 cri-dockerd[2749]: time="2024-07-22T00:16:10Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	2a9e0bb536892       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   b6ae6c4f24879
	9aed650d90b63       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   9b4a99e6cc955
	345fbcd3daaf5       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   9b4a99e6cc955
	9c90546ffec6e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b6ae6c4f24879
	f63aa2e54ac33       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   c0b10ed1fd44b
	0d9268095b8d9       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   b324f59d38476
	faf47f89606d0       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   a40058b5c737c
	cd92551d008f5       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   4649b299907f5
	5903667374c9d       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   33cb6ae659f9b
	d57096f56066b       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   f71aeb4164bec
	
	
	==> coredns [2a9e0bb53689] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1134099448900870099.3378022480486248620. HINFO: read udp 10.244.0.2:60222->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1134099448900870099.3378022480486248620. HINFO: read udp 10.244.0.2:56278->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1134099448900870099.3378022480486248620. HINFO: read udp 10.244.0.2:53296->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1134099448900870099.3378022480486248620. HINFO: read udp 10.244.0.2:41576->10.0.2.3:53: i/o timeout
	
	
	==> coredns [345fbcd3daaf] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8387900060401593875.1039897606690797152. HINFO: read udp 10.244.0.3:59021->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8387900060401593875.1039897606690797152. HINFO: read udp 10.244.0.3:35155->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8387900060401593875.1039897606690797152. HINFO: read udp 10.244.0.3:39756->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8387900060401593875.1039897606690797152. HINFO: read udp 10.244.0.3:38688->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8387900060401593875.1039897606690797152. HINFO: read udp 10.244.0.3:37490->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8387900060401593875.1039897606690797152. HINFO: read udp 10.244.0.3:35650->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8387900060401593875.1039897606690797152. HINFO: read udp 10.244.0.3:53511->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8387900060401593875.1039897606690797152. HINFO: read udp 10.244.0.3:35834->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8387900060401593875.1039897606690797152. HINFO: read udp 10.244.0.3:56180->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8387900060401593875.1039897606690797152. HINFO: read udp 10.244.0.3:53310->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9aed650d90b6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3613658978716021798.986397683288341357. HINFO: read udp 10.244.0.3:51212->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3613658978716021798.986397683288341357. HINFO: read udp 10.244.0.3:40151->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3613658978716021798.986397683288341357. HINFO: read udp 10.244.0.3:43599->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3613658978716021798.986397683288341357. HINFO: read udp 10.244.0.3:60143->10.0.2.3:53: i/o timeout
	
	
	==> coredns [9c90546ffec6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6636022192198859995.4887689956932698535. HINFO: read udp 10.244.0.2:59061->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6636022192198859995.4887689956932698535. HINFO: read udp 10.244.0.2:45168->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6636022192198859995.4887689956932698535. HINFO: read udp 10.244.0.2:59658->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6636022192198859995.4887689956932698535. HINFO: read udp 10.244.0.2:45799->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6636022192198859995.4887689956932698535. HINFO: read udp 10.244.0.2:55719->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6636022192198859995.4887689956932698535. HINFO: read udp 10.244.0.2:33039->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6636022192198859995.4887689956932698535. HINFO: read udp 10.244.0.2:58780->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6636022192198859995.4887689956932698535. HINFO: read udp 10.244.0.2:36074->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6636022192198859995.4887689956932698535. HINFO: read udp 10.244.0.2:37219->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6636022192198859995.4887689956932698535. HINFO: read udp 10.244.0.2:57402->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-647000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-647000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=running-upgrade-647000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_21T17_11_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:11:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-647000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:16:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:11:53 +0000   Mon, 22 Jul 2024 00:11:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:11:53 +0000   Mon, 22 Jul 2024 00:11:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:11:53 +0000   Mon, 22 Jul 2024 00:11:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:11:53 +0000   Mon, 22 Jul 2024 00:11:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-647000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc768efbf84a48de913bb1129f9a85e4
	  System UUID:                fc768efbf84a48de913bb1129f9a85e4
	  Boot ID:                    b90d0f89-6f86-421b-b387-378348c3d4df
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-8wlsf                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 coredns-6d4b75cb6d-lszxc                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-running-upgrade-647000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-running-upgrade-647000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-running-upgrade-647000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-svvtm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-running-upgrade-647000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m27s (x4 over 4m27s)  kubelet          Node running-upgrade-647000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x4 over 4m27s)  kubelet          Node running-upgrade-647000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x4 over 4m27s)  kubelet          Node running-upgrade-647000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node running-upgrade-647000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node running-upgrade-647000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node running-upgrade-647000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m21s                  kubelet          Node running-upgrade-647000 status is now: NodeReady
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s                   node-controller  Node running-upgrade-647000 event: Registered Node running-upgrade-647000 in Controller
	
	
	==> dmesg <==
	[  +1.620967] systemd-fstab-generator[879]: Ignoring "noauto" for root device
	[  +0.060443] systemd-fstab-generator[890]: Ignoring "noauto" for root device
	[  +0.058261] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +1.141947] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.076980] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.059122] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.276156] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
	[ +14.142982] systemd-fstab-generator[1957]: Ignoring "noauto" for root device
	[  +2.408080] systemd-fstab-generator[2236]: Ignoring "noauto" for root device
	[  +0.124598] systemd-fstab-generator[2269]: Ignoring "noauto" for root device
	[  +0.073036] systemd-fstab-generator[2280]: Ignoring "noauto" for root device
	[  +0.079536] systemd-fstab-generator[2295]: Ignoring "noauto" for root device
	[  +1.743772] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.133059] systemd-fstab-generator[2706]: Ignoring "noauto" for root device
	[  +0.063735] systemd-fstab-generator[2717]: Ignoring "noauto" for root device
	[  +0.073563] systemd-fstab-generator[2728]: Ignoring "noauto" for root device
	[  +0.076099] systemd-fstab-generator[2742]: Ignoring "noauto" for root device
	[  +2.429551] systemd-fstab-generator[2892]: Ignoring "noauto" for root device
	[  +5.422811] systemd-fstab-generator[3295]: Ignoring "noauto" for root device
	[  +0.988866] systemd-fstab-generator[3423]: Ignoring "noauto" for root device
	[ +17.001793] kauditd_printk_skb: 68 callbacks suppressed
	[Jul22 00:08] kauditd_printk_skb: 19 callbacks suppressed
	[Jul22 00:11] systemd-fstab-generator[10019]: Ignoring "noauto" for root device
	[  +5.627676] systemd-fstab-generator[10633]: Ignoring "noauto" for root device
	[  +0.454398] systemd-fstab-generator[10767]: Ignoring "noauto" for root device
	
	
	==> etcd [cd92551d008f] <==
	{"level":"info","ts":"2024-07-22T00:11:48.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-22T00:11:48.609Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-22T00:11:48.611Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-22T00:11:48.611Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-22T00:11:48.611Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-22T00:11:48.611Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-22T00:11:48.611Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T00:11:49.366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-22T00:11:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-22T00:11:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-22T00:11:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-22T00:11:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-22T00:11:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-22T00:11:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-22T00:11:49.367Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-647000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T00:11:49.367Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:11:49.367Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:11:49.368Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:11:49.368Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T00:11:49.368Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:11:49.368Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T00:11:49.368Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:11:49.368Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:11:49.368Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:11:49.376Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 00:16:14 up 9 min,  0 users,  load average: 0.13, 0.25, 0.16
	Linux running-upgrade-647000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d57096f56066] <==
	I0722 00:11:50.586762       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0722 00:11:50.614580       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0722 00:11:50.616392       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0722 00:11:50.616977       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0722 00:11:50.618418       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0722 00:11:50.618428       1 cache.go:39] Caches are synced for autoregister controller
	I0722 00:11:50.623487       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0722 00:11:51.353949       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0722 00:11:51.517130       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0722 00:11:51.518636       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0722 00:11:51.518646       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 00:11:51.647358       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 00:11:51.658239       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 00:11:51.682652       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0722 00:11:51.684831       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0722 00:11:51.685200       1 controller.go:611] quota admission added evaluator for: endpoints
	I0722 00:11:51.686575       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0722 00:11:52.650853       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0722 00:11:53.072440       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0722 00:11:53.075605       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0722 00:11:53.090191       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0722 00:11:53.130067       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0722 00:12:07.359249       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0722 00:12:07.411119       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0722 00:12:07.868726       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [5903667374c9] <==
	I0722 00:12:06.708777       1 shared_informer.go:262] Caches are synced for PVC protection
	I0722 00:12:06.708818       1 shared_informer.go:262] Caches are synced for expand
	I0722 00:12:06.708894       1 shared_informer.go:262] Caches are synced for crt configmap
	I0722 00:12:06.708983       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0722 00:12:06.709129       1 shared_informer.go:262] Caches are synced for taint
	I0722 00:12:06.709170       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0722 00:12:06.709220       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-647000. Assuming now as a timestamp.
	I0722 00:12:06.709249       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0722 00:12:06.709471       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0722 00:12:06.709703       1 event.go:294] "Event occurred" object="running-upgrade-647000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-647000 event: Registered Node running-upgrade-647000 in Controller"
	I0722 00:12:06.711827       1 shared_informer.go:262] Caches are synced for PV protection
	I0722 00:12:06.807357       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0722 00:12:06.809417       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0722 00:12:06.857896       1 shared_informer.go:262] Caches are synced for attach detach
	I0722 00:12:06.859066       1 shared_informer.go:262] Caches are synced for endpoint
	I0722 00:12:06.866036       1 shared_informer.go:262] Caches are synced for resource quota
	I0722 00:12:06.907284       1 shared_informer.go:262] Caches are synced for HPA
	I0722 00:12:06.911436       1 shared_informer.go:262] Caches are synced for resource quota
	I0722 00:12:07.331054       1 shared_informer.go:262] Caches are synced for garbage collector
	I0722 00:12:07.362343       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-svvtm"
	I0722 00:12:07.394882       1 shared_informer.go:262] Caches are synced for garbage collector
	I0722 00:12:07.394943       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0722 00:12:07.413946       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0722 00:12:07.710666       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-lszxc"
	I0722 00:12:07.716166       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-8wlsf"
	
	
	==> kube-proxy [0d9268095b8d] <==
	I0722 00:12:07.857589       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0722 00:12:07.857616       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0722 00:12:07.857636       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0722 00:12:07.866987       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0722 00:12:07.866997       1 server_others.go:206] "Using iptables Proxier"
	I0722 00:12:07.867019       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0722 00:12:07.867148       1 server.go:661] "Version info" version="v1.24.1"
	I0722 00:12:07.867152       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:12:07.867388       1 config.go:317] "Starting service config controller"
	I0722 00:12:07.867398       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0722 00:12:07.867406       1 config.go:226] "Starting endpoint slice config controller"
	I0722 00:12:07.867408       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0722 00:12:07.867692       1 config.go:444] "Starting node config controller"
	I0722 00:12:07.867695       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0722 00:12:07.967625       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0722 00:12:07.967629       1 shared_informer.go:262] Caches are synced for service config
	I0722 00:12:07.967725       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [faf47f89606d] <==
	W0722 00:11:50.578135       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:11:50.578279       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 00:11:50.578168       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:11:50.578283       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 00:11:50.578179       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 00:11:50.578287       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 00:11:50.578190       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 00:11:50.578290       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0722 00:11:50.578202       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:11:50.578294       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 00:11:51.388752       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 00:11:51.388948       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0722 00:11:51.423055       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 00:11:51.423072       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 00:11:51.443295       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 00:11:51.443306       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 00:11:51.482206       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 00:11:51.482216       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 00:11:51.542057       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 00:11:51.542074       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 00:11:51.571248       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 00:11:51.571264       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 00:11:51.585830       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 00:11:51.585845       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0722 00:11:53.469232       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-07-22 00:06:55 UTC, ends at Mon 2024-07-22 00:16:14 UTC. --
	Jul 22 00:11:55 running-upgrade-647000 kubelet[10639]: E0722 00:11:55.309778   10639 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-647000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-647000"
	Jul 22 00:12:06 running-upgrade-647000 kubelet[10639]: I0722 00:12:06.719362   10639 topology_manager.go:200] "Topology Admit Handler"
	Jul 22 00:12:06 running-upgrade-647000 kubelet[10639]: I0722 00:12:06.771976   10639 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 22 00:12:06 running-upgrade-647000 kubelet[10639]: I0722 00:12:06.772061   10639 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crr46\" (UniqueName: \"kubernetes.io/projected/5f226f1e-b4b6-45c5-b70c-f8fb8fedcb4b-kube-api-access-crr46\") pod \"storage-provisioner\" (UID: \"5f226f1e-b4b6-45c5-b70c-f8fb8fedcb4b\") " pod="kube-system/storage-provisioner"
	Jul 22 00:12:06 running-upgrade-647000 kubelet[10639]: I0722 00:12:06.772082   10639 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5f226f1e-b4b6-45c5-b70c-f8fb8fedcb4b-tmp\") pod \"storage-provisioner\" (UID: \"5f226f1e-b4b6-45c5-b70c-f8fb8fedcb4b\") " pod="kube-system/storage-provisioner"
	Jul 22 00:12:06 running-upgrade-647000 kubelet[10639]: I0722 00:12:06.772397   10639 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 22 00:12:06 running-upgrade-647000 kubelet[10639]: E0722 00:12:06.875822   10639 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 22 00:12:06 running-upgrade-647000 kubelet[10639]: E0722 00:12:06.875842   10639 projected.go:192] Error preparing data for projected volume kube-api-access-crr46 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 22 00:12:06 running-upgrade-647000 kubelet[10639]: E0722 00:12:06.875888   10639 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/5f226f1e-b4b6-45c5-b70c-f8fb8fedcb4b-kube-api-access-crr46 podName:5f226f1e-b4b6-45c5-b70c-f8fb8fedcb4b nodeName:}" failed. No retries permitted until 2024-07-22 00:12:07.375873755 +0000 UTC m=+14.313982470 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-crr46" (UniqueName: "kubernetes.io/projected/5f226f1e-b4b6-45c5-b70c-f8fb8fedcb4b-kube-api-access-crr46") pod "storage-provisioner" (UID: "5f226f1e-b4b6-45c5-b70c-f8fb8fedcb4b") : configmap "kube-root-ca.crt" not found
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: I0722 00:12:07.364619   10639 topology_manager.go:200] "Topology Admit Handler"
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: E0722 00:12:07.379716   10639 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: E0722 00:12:07.379734   10639 projected.go:192] Error preparing data for projected volume kube-api-access-crr46 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: E0722 00:12:07.379767   10639 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/5f226f1e-b4b6-45c5-b70c-f8fb8fedcb4b-kube-api-access-crr46 podName:5f226f1e-b4b6-45c5-b70c-f8fb8fedcb4b nodeName:}" failed. No retries permitted until 2024-07-22 00:12:08.379758229 +0000 UTC m=+15.317866944 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-crr46" (UniqueName: "kubernetes.io/projected/5f226f1e-b4b6-45c5-b70c-f8fb8fedcb4b-kube-api-access-crr46") pod "storage-provisioner" (UID: "5f226f1e-b4b6-45c5-b70c-f8fb8fedcb4b") : configmap "kube-root-ca.crt" not found
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: I0722 00:12:07.480102   10639 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e2b4e171-2bc6-41b5-a9d7-bd6582058f76-kube-proxy\") pod \"kube-proxy-svvtm\" (UID: \"e2b4e171-2bc6-41b5-a9d7-bd6582058f76\") " pod="kube-system/kube-proxy-svvtm"
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: I0722 00:12:07.480132   10639 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqxs9\" (UniqueName: \"kubernetes.io/projected/e2b4e171-2bc6-41b5-a9d7-bd6582058f76-kube-api-access-sqxs9\") pod \"kube-proxy-svvtm\" (UID: \"e2b4e171-2bc6-41b5-a9d7-bd6582058f76\") " pod="kube-system/kube-proxy-svvtm"
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: I0722 00:12:07.480152   10639 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2b4e171-2bc6-41b5-a9d7-bd6582058f76-xtables-lock\") pod \"kube-proxy-svvtm\" (UID: \"e2b4e171-2bc6-41b5-a9d7-bd6582058f76\") " pod="kube-system/kube-proxy-svvtm"
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: I0722 00:12:07.480163   10639 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2b4e171-2bc6-41b5-a9d7-bd6582058f76-lib-modules\") pod \"kube-proxy-svvtm\" (UID: \"e2b4e171-2bc6-41b5-a9d7-bd6582058f76\") " pod="kube-system/kube-proxy-svvtm"
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: I0722 00:12:07.717086   10639 topology_manager.go:200] "Topology Admit Handler"
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: I0722 00:12:07.723843   10639 topology_manager.go:200] "Topology Admit Handler"
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: I0722 00:12:07.881628   10639 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f275d02-5662-4c90-b8b5-88bfbe93842b-config-volume\") pod \"coredns-6d4b75cb6d-lszxc\" (UID: \"5f275d02-5662-4c90-b8b5-88bfbe93842b\") " pod="kube-system/coredns-6d4b75cb6d-lszxc"
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: I0722 00:12:07.881665   10639 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8ph9\" (UniqueName: \"kubernetes.io/projected/3fc24203-8e60-414f-947d-5bf605a60299-kube-api-access-l8ph9\") pod \"coredns-6d4b75cb6d-8wlsf\" (UID: \"3fc24203-8e60-414f-947d-5bf605a60299\") " pod="kube-system/coredns-6d4b75cb6d-8wlsf"
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: I0722 00:12:07.881676   10639 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zpwb\" (UniqueName: \"kubernetes.io/projected/5f275d02-5662-4c90-b8b5-88bfbe93842b-kube-api-access-5zpwb\") pod \"coredns-6d4b75cb6d-lszxc\" (UID: \"5f275d02-5662-4c90-b8b5-88bfbe93842b\") " pod="kube-system/coredns-6d4b75cb6d-lszxc"
	Jul 22 00:12:07 running-upgrade-647000 kubelet[10639]: I0722 00:12:07.881696   10639 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fc24203-8e60-414f-947d-5bf605a60299-config-volume\") pod \"coredns-6d4b75cb6d-8wlsf\" (UID: \"3fc24203-8e60-414f-947d-5bf605a60299\") " pod="kube-system/coredns-6d4b75cb6d-8wlsf"
	Jul 22 00:15:56 running-upgrade-647000 kubelet[10639]: I0722 00:15:56.844059   10639 scope.go:110] "RemoveContainer" containerID="34af2ac5463426bf14039264f46408dc183db7a084e46b63f2c9186f984b9289"
	Jul 22 00:15:56 running-upgrade-647000 kubelet[10639]: I0722 00:15:56.859558   10639 scope.go:110] "RemoveContainer" containerID="7ccf2a2019bd640a9eefd76806f69e2ffca512b93c37400439fe6daa8d999fcd"
	
	
	==> storage-provisioner [f63aa2e54ac3] <==
	I0722 00:12:08.831975       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 00:12:08.837523       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 00:12:08.841584       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 00:12:08.847970       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 00:12:08.848469       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7930aea4-4795-4943-9280-d8ae405b2565", APIVersion:"v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-647000_10dca193-cdc8-48fe-beff-0a2ff7419196 became leader
	I0722 00:12:08.848509       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-647000_10dca193-cdc8-48fe-beff-0a2ff7419196!
	I0722 00:12:08.953275       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-647000_10dca193-cdc8-48fe-beff-0a2ff7419196!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-647000 -n running-upgrade-647000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-647000 -n running-upgrade-647000: exit status 2 (15.669138667s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-647000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-647000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-647000
--- FAIL: TestRunningBinaryUpgrade (600.45s)

TestKubernetesUpgrade (18.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-140000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-140000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.74607175s)

-- stdout --
	* [kubernetes-upgrade-140000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-140000" primary control-plane node in "kubernetes-upgrade-140000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-140000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:09:30.036675    5499 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:09:30.036814    5499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:09:30.036817    5499 out.go:304] Setting ErrFile to fd 2...
	I0721 17:09:30.036819    5499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:09:30.036929    5499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:09:30.038002    5499 out.go:298] Setting JSON to false
	I0721 17:09:30.054363    5499 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4133,"bootTime":1721602837,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:09:30.054440    5499 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:09:30.060313    5499 out.go:177] * [kubernetes-upgrade-140000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:09:30.068172    5499 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:09:30.068260    5499 notify.go:220] Checking for updates...
	I0721 17:09:30.077298    5499 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:09:30.078550    5499 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:09:30.081260    5499 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:09:30.084291    5499 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:09:30.087271    5499 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:09:30.090663    5499 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:09:30.090733    5499 config.go:182] Loaded profile config "running-upgrade-647000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:09:30.090776    5499 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:09:30.095256    5499 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:09:30.102307    5499 start.go:297] selected driver: qemu2
	I0721 17:09:30.102332    5499 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:09:30.102341    5499 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:09:30.104466    5499 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:09:30.107211    5499 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:09:30.110334    5499 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 17:09:30.110346    5499 cni.go:84] Creating CNI manager for ""
	I0721 17:09:30.110352    5499 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0721 17:09:30.110380    5499 start.go:340] cluster config:
	{Name:kubernetes-upgrade-140000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:09:30.113842    5499 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:09:30.121331    5499 out.go:177] * Starting "kubernetes-upgrade-140000" primary control-plane node in "kubernetes-upgrade-140000" cluster
	I0721 17:09:30.124224    5499 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 17:09:30.124242    5499 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0721 17:09:30.124256    5499 cache.go:56] Caching tarball of preloaded images
	I0721 17:09:30.124334    5499 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:09:30.124340    5499 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0721 17:09:30.124389    5499 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/kubernetes-upgrade-140000/config.json ...
	I0721 17:09:30.124400    5499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/kubernetes-upgrade-140000/config.json: {Name:mk1f0c75ba086322a8925593de156af8e858606a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:09:30.124685    5499 start.go:360] acquireMachinesLock for kubernetes-upgrade-140000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:09:30.124717    5499 start.go:364] duration metric: took 25.583µs to acquireMachinesLock for "kubernetes-upgrade-140000"
	I0721 17:09:30.124727    5499 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:09:30.124764    5499 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:09:30.131239    5499 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:09:30.146497    5499 start.go:159] libmachine.API.Create for "kubernetes-upgrade-140000" (driver="qemu2")
	I0721 17:09:30.146529    5499 client.go:168] LocalClient.Create starting
	I0721 17:09:30.146623    5499 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:09:30.146651    5499 main.go:141] libmachine: Decoding PEM data...
	I0721 17:09:30.146662    5499 main.go:141] libmachine: Parsing certificate...
	I0721 17:09:30.146700    5499 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:09:30.146723    5499 main.go:141] libmachine: Decoding PEM data...
	I0721 17:09:30.146729    5499 main.go:141] libmachine: Parsing certificate...
	I0721 17:09:30.147124    5499 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:09:30.285379    5499 main.go:141] libmachine: Creating SSH key...
	I0721 17:09:30.405768    5499 main.go:141] libmachine: Creating Disk image...
	I0721 17:09:30.405780    5499 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:09:30.405974    5499 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0721 17:09:30.415397    5499 main.go:141] libmachine: STDOUT: 
	I0721 17:09:30.415416    5499 main.go:141] libmachine: STDERR: 
	I0721 17:09:30.415476    5499 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2 +20000M
	I0721 17:09:30.423380    5499 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:09:30.423398    5499 main.go:141] libmachine: STDERR: 
	I0721 17:09:30.423411    5499 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0721 17:09:30.423419    5499 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:09:30.423435    5499 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:09:30.423466    5499 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:8e:47:8c:d9:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0721 17:09:30.425040    5499 main.go:141] libmachine: STDOUT: 
	I0721 17:09:30.425054    5499 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:09:30.425070    5499 client.go:171] duration metric: took 278.544458ms to LocalClient.Create
	I0721 17:09:32.427129    5499 start.go:128] duration metric: took 2.30241425s to createHost
	I0721 17:09:32.427162    5499 start.go:83] releasing machines lock for "kubernetes-upgrade-140000", held for 2.302500375s
	W0721 17:09:32.427208    5499 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:09:32.437104    5499 out.go:177] * Deleting "kubernetes-upgrade-140000" in qemu2 ...
	W0721 17:09:32.454409    5499 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:09:32.454433    5499 start.go:729] Will try again in 5 seconds ...
	I0721 17:09:37.456509    5499 start.go:360] acquireMachinesLock for kubernetes-upgrade-140000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:09:37.456949    5499 start.go:364] duration metric: took 359.917µs to acquireMachinesLock for "kubernetes-upgrade-140000"
	I0721 17:09:37.457032    5499 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:09:37.457233    5499 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:09:37.460946    5499 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:09:37.502430    5499 start.go:159] libmachine.API.Create for "kubernetes-upgrade-140000" (driver="qemu2")
	I0721 17:09:37.502493    5499 client.go:168] LocalClient.Create starting
	I0721 17:09:37.502617    5499 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:09:37.502683    5499 main.go:141] libmachine: Decoding PEM data...
	I0721 17:09:37.502698    5499 main.go:141] libmachine: Parsing certificate...
	I0721 17:09:37.502746    5499 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:09:37.502791    5499 main.go:141] libmachine: Decoding PEM data...
	I0721 17:09:37.502800    5499 main.go:141] libmachine: Parsing certificate...
	I0721 17:09:37.503302    5499 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:09:37.649210    5499 main.go:141] libmachine: Creating SSH key...
	I0721 17:09:37.694104    5499 main.go:141] libmachine: Creating Disk image...
	I0721 17:09:37.694114    5499 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:09:37.694282    5499 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0721 17:09:37.703470    5499 main.go:141] libmachine: STDOUT: 
	I0721 17:09:37.703490    5499 main.go:141] libmachine: STDERR: 
	I0721 17:09:37.703547    5499 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2 +20000M
	I0721 17:09:37.711431    5499 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:09:37.711445    5499 main.go:141] libmachine: STDERR: 
	I0721 17:09:37.711460    5499 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0721 17:09:37.711466    5499 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:09:37.711479    5499 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:09:37.711509    5499 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:78:c0:5b:a0:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0721 17:09:37.713080    5499 main.go:141] libmachine: STDOUT: 
	I0721 17:09:37.713105    5499 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:09:37.713118    5499 client.go:171] duration metric: took 210.626041ms to LocalClient.Create
	I0721 17:09:39.715271    5499 start.go:128] duration metric: took 2.258064959s to createHost
	I0721 17:09:39.715342    5499 start.go:83] releasing machines lock for "kubernetes-upgrade-140000", held for 2.258431875s
	W0721 17:09:39.715732    5499 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-140000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-140000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:09:39.726318    5499 out.go:177] 
	W0721 17:09:39.731445    5499 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:09:39.731471    5499 out.go:239] * 
	* 
	W0721 17:09:39.733832    5499 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:09:39.745282    5499 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-140000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
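
Both VM creations in the attempt above fail before provisioning because qemu cannot attach to the vmnet helper: `Failed to connect to "/var/run/socket_vmnet": Connection refused`. As a purely illustrative diagnostic sketch (not part of the test suite; the file name and messages are invented, only the socket path comes from the log), that precondition could be probed on the build host with the Go standard library:

	// check_socket_vmnet.go — hypothetical diagnostic sketch, Go standard library only.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config logged above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here means nothing is listening on the socket path,
			// which is the pattern behind the repeated start failures in this run.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet reachable")
	}

If this dial fails on the CI host, any qemu2-driver test that needs a fresh VM would fail within a few seconds of reaching VM creation, which is consistent with the ~10 s failures elsewhere in this report.
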
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-140000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-140000: (2.903322s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-140000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-140000 status --format={{.Host}}: exit status 7 (35.952209ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
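
For reference, the `--format={{.Host}}` and `--format={{.APIServer}}` arguments used by the harness are Go text/template expressions rendered against minikube's status data; a self-contained sketch of that mechanism (the `Status` struct below is a stand-in, not minikube's actual type) looks like this:

	package main

	import (
		"os"
		"text/template"
	)

	// Stand-in type exposing only the fields the templates above reference.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Stopped", APIServer: "Stopped"} // values seen in the -- stdout -- blocks
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints: Stopped
	}

Here the harness accepts the non-zero exit because, after the explicit stop above, it only needs the rendered value ("Stopped"), not a healthy cluster.
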
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-140000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-140000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.183056958s)

-- stdout --
	* [kubernetes-upgrade-140000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-140000" primary control-plane node in "kubernetes-upgrade-140000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-140000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-140000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:09:42.724943    5533 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:09:42.725057    5533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:09:42.725061    5533 out.go:304] Setting ErrFile to fd 2...
	I0721 17:09:42.725063    5533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:09:42.725184    5533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:09:42.726193    5533 out.go:298] Setting JSON to false
	I0721 17:09:42.742313    5533 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4145,"bootTime":1721602837,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:09:42.742390    5533 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:09:42.747622    5533 out.go:177] * [kubernetes-upgrade-140000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:09:42.754589    5533 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:09:42.754649    5533 notify.go:220] Checking for updates...
	I0721 17:09:42.761598    5533 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:09:42.764517    5533 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:09:42.767565    5533 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:09:42.770589    5533 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:09:42.773493    5533 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:09:42.776791    5533 config.go:182] Loaded profile config "kubernetes-upgrade-140000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0721 17:09:42.777056    5533 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:09:42.781578    5533 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 17:09:42.788538    5533 start.go:297] selected driver: qemu2
	I0721 17:09:42.788547    5533 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:09:42.788607    5533 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:09:42.790931    5533 cni.go:84] Creating CNI manager for ""
	I0721 17:09:42.790947    5533 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:09:42.790974    5533 start.go:340] cluster config:
	{Name:kubernetes-upgrade-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:09:42.794514    5533 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:09:42.799527    5533 out.go:177] * Starting "kubernetes-upgrade-140000" primary control-plane node in "kubernetes-upgrade-140000" cluster
	I0721 17:09:42.803540    5533 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0721 17:09:42.803554    5533 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0721 17:09:42.803564    5533 cache.go:56] Caching tarball of preloaded images
	I0721 17:09:42.803619    5533 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:09:42.803624    5533 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0721 17:09:42.803676    5533 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/kubernetes-upgrade-140000/config.json ...
	I0721 17:09:42.804106    5533 start.go:360] acquireMachinesLock for kubernetes-upgrade-140000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:09:42.804135    5533 start.go:364] duration metric: took 22.666µs to acquireMachinesLock for "kubernetes-upgrade-140000"
	I0721 17:09:42.804145    5533 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:09:42.804150    5533 fix.go:54] fixHost starting: 
	I0721 17:09:42.804266    5533 fix.go:112] recreateIfNeeded on kubernetes-upgrade-140000: state=Stopped err=<nil>
	W0721 17:09:42.804275    5533 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:09:42.812547    5533 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-140000" ...
	I0721 17:09:42.816409    5533 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:09:42.816444    5533 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:78:c0:5b:a0:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0721 17:09:42.818557    5533 main.go:141] libmachine: STDOUT: 
	I0721 17:09:42.818574    5533 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:09:42.818620    5533 fix.go:56] duration metric: took 14.469541ms for fixHost
	I0721 17:09:42.818624    5533 start.go:83] releasing machines lock for "kubernetes-upgrade-140000", held for 14.484834ms
	W0721 17:09:42.818631    5533 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:09:42.818670    5533 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:09:42.818674    5533 start.go:729] Will try again in 5 seconds ...
	I0721 17:09:47.820791    5533 start.go:360] acquireMachinesLock for kubernetes-upgrade-140000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:09:47.821416    5533 start.go:364] duration metric: took 471.208µs to acquireMachinesLock for "kubernetes-upgrade-140000"
	I0721 17:09:47.821620    5533 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:09:47.821640    5533 fix.go:54] fixHost starting: 
	I0721 17:09:47.822405    5533 fix.go:112] recreateIfNeeded on kubernetes-upgrade-140000: state=Stopped err=<nil>
	W0721 17:09:47.822432    5533 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:09:47.826897    5533 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-140000" ...
	I0721 17:09:47.833842    5533 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:09:47.834092    5533 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:78:c0:5b:a0:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0721 17:09:47.844127    5533 main.go:141] libmachine: STDOUT: 
	I0721 17:09:47.844187    5533 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:09:47.844268    5533 fix.go:56] duration metric: took 22.629167ms for fixHost
	I0721 17:09:47.844284    5533 start.go:83] releasing machines lock for "kubernetes-upgrade-140000", held for 22.807375ms
	W0721 17:09:47.844486    5533 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-140000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-140000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:09:47.852783    5533 out.go:177] 
	W0721 17:09:47.856909    5533 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:09:47.856942    5533 out.go:239] * 
	* 
	W0721 17:09:47.858711    5533 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:09:47.867837    5533 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-140000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-140000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-140000 version --output=json: exit status 1 (58.076833ms)

** stderr ** 
	error: context "kubernetes-upgrade-140000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-21 17:09:47.940706 -0700 PDT m=+2758.504075834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-140000 -n kubernetes-upgrade-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-140000 -n kubernetes-upgrade-140000: exit status 7 (31.259042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-140000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-140000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-140000
--- FAIL: TestKubernetesUpgrade (18.04s)
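Every restart attempt in the log above fails at the same step: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and the driver gives up with GUEST_PROVISION. The following is a minimal Go sketch (illustrative only, not minikube code; the socket path is copied from the failing command above) that probes that socket the way a client would and reports the same class of error when no socket_vmnet daemon is listening:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the socket_vmnet_client invocation in the log above.
	const sock = "/var/run/socket_vmnet"

	// A refused connection here corresponds to the repeated
	// `Failed to connect to "/var/run/socket_vmnet": Connection refused` lines.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}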

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.71s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19312
- KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2128967780/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.71s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.31s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19312
- KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4178029833/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.31s)
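Both TestHyperkitDriverSkipUpgrade subtests above abort for the same reason: the hyperkit driver is only available for darwin/amd64, and this job runs on an arm64 Mac, so minikube refuses the driver with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic is exercised. A rough Go sketch of such a platform gate (illustrative only, not minikube's actual validation code; the exit code is the one reported by the test above):

package main

import (
	"fmt"
	"os"
	"runtime"
)

func main() {
	// The hyperkit driver is built only for Intel (amd64) macOS hosts.
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		fmt.Printf("X Exiting due to DRV_UNSUPPORTED_OS: the driver 'hyperkit' is not supported on %s/%s\n",
			runtime.GOOS, runtime.GOARCH)
		os.Exit(56) // matches the "exit status 56" reported above
	}
	fmt.Println("hyperkit driver is supported on this platform")
}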

TestStoppedBinaryUpgrade/Upgrade (580.86s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1396918652 start -p stopped-upgrade-930000 --memory=2200 --vm-driver=qemu2 
E0721 17:10:18.999395    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1396918652 start -p stopped-upgrade-930000 --memory=2200 --vm-driver=qemu2 : (45.826482667s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1396918652 -p stopped-upgrade-930000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1396918652 -p stopped-upgrade-930000 stop: (12.117231s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-930000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0721 17:13:09.294022    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 17:13:22.062681    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 17:15:18.991031    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-930000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.817137083s)

-- stdout --
	* [stopped-upgrade-930000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-930000" primary control-plane node in "stopped-upgrade-930000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-930000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0721 17:10:46.986520    5580 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:10:46.986692    5580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:10:46.986696    5580 out.go:304] Setting ErrFile to fd 2...
	I0721 17:10:46.986698    5580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:10:46.986863    5580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:10:46.988078    5580 out.go:298] Setting JSON to false
	I0721 17:10:47.006696    5580 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4209,"bootTime":1721602837,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:10:47.006760    5580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:10:47.011441    5580 out.go:177] * [stopped-upgrade-930000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:10:47.019428    5580 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:10:47.019469    5580 notify.go:220] Checking for updates...
	I0721 17:10:47.026387    5580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:10:47.029376    5580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:10:47.032403    5580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:10:47.035409    5580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:10:47.036713    5580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:10:47.039709    5580 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:10:47.043320    5580 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0721 17:10:47.046413    5580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:10:47.050370    5580 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 17:10:47.057379    5580 start.go:297] selected driver: qemu2
	I0721 17:10:47.057388    5580 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0721 17:10:47.057431    5580 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:10:47.060012    5580 cni.go:84] Creating CNI manager for ""
	I0721 17:10:47.060031    5580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:10:47.060050    5580 start.go:340] cluster config:
	{Name:stopped-upgrade-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0721 17:10:47.060105    5580 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:10:47.067400    5580 out.go:177] * Starting "stopped-upgrade-930000" primary control-plane node in "stopped-upgrade-930000" cluster
	I0721 17:10:47.071346    5580 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0721 17:10:47.071358    5580 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0721 17:10:47.071364    5580 cache.go:56] Caching tarball of preloaded images
	I0721 17:10:47.071419    5580 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:10:47.071424    5580 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0721 17:10:47.071484    5580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/config.json ...
	I0721 17:10:47.071788    5580 start.go:360] acquireMachinesLock for stopped-upgrade-930000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:10:47.071820    5580 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "stopped-upgrade-930000"
	I0721 17:10:47.071828    5580 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:10:47.071833    5580 fix.go:54] fixHost starting: 
	I0721 17:10:47.071931    5580 fix.go:112] recreateIfNeeded on stopped-upgrade-930000: state=Stopped err=<nil>
	W0721 17:10:47.071938    5580 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:10:47.076382    5580 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-930000" ...
	I0721 17:10:47.084351    5580 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:10:47.084413    5580 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50452-:22,hostfwd=tcp::50453-:2376,hostname=stopped-upgrade-930000 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/disk.qcow2
	I0721 17:10:47.131330    5580 main.go:141] libmachine: STDOUT: 
	I0721 17:10:47.131359    5580 main.go:141] libmachine: STDERR: 
	I0721 17:10:47.131371    5580 main.go:141] libmachine: Waiting for VM to start (ssh -p 50452 docker@127.0.0.1)...
	I0721 17:11:06.992145    5580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/config.json ...
	I0721 17:11:06.992757    5580 machine.go:94] provisionDockerMachine start ...
	I0721 17:11:06.992951    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:06.993377    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:06.993389    5580 main.go:141] libmachine: About to run SSH command:
	hostname
	I0721 17:11:07.076225    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0721 17:11:07.076257    5580 buildroot.go:166] provisioning hostname "stopped-upgrade-930000"
	I0721 17:11:07.076387    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:07.076652    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:07.076663    5580 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-930000 && echo "stopped-upgrade-930000" | sudo tee /etc/hostname
	I0721 17:11:07.150582    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-930000
	
	I0721 17:11:07.150640    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:07.150797    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:07.150807    5580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-930000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-930000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-930000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 17:11:07.213572    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 17:11:07.213585    5580 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1409/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1409/.minikube}
	I0721 17:11:07.213601    5580 buildroot.go:174] setting up certificates
	I0721 17:11:07.213606    5580 provision.go:84] configureAuth start
	I0721 17:11:07.213610    5580 provision.go:143] copyHostCerts
	I0721 17:11:07.213700    5580 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.pem, removing ...
	I0721 17:11:07.213710    5580 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.pem
	I0721 17:11:07.213830    5580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.pem (1078 bytes)
	I0721 17:11:07.214016    5580 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1409/.minikube/cert.pem, removing ...
	I0721 17:11:07.214020    5580 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1409/.minikube/cert.pem
	I0721 17:11:07.214074    5580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1409/.minikube/cert.pem (1123 bytes)
	I0721 17:11:07.214184    5580 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1409/.minikube/key.pem, removing ...
	I0721 17:11:07.214187    5580 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1409/.minikube/key.pem
	I0721 17:11:07.214233    5580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1409/.minikube/key.pem (1675 bytes)
	I0721 17:11:07.214323    5580 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-930000 san=[127.0.0.1 localhost minikube stopped-upgrade-930000]
	I0721 17:11:07.324288    5580 provision.go:177] copyRemoteCerts
	I0721 17:11:07.324323    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 17:11:07.324331    5580 sshutil.go:53] new ssh client: &{IP:localhost Port:50452 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/id_rsa Username:docker}
	I0721 17:11:07.359832    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0721 17:11:07.366770    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0721 17:11:07.373513    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0721 17:11:07.380731    5580 provision.go:87] duration metric: took 167.12475ms to configureAuth
	I0721 17:11:07.380740    5580 buildroot.go:189] setting minikube options for container-runtime
	I0721 17:11:07.380852    5580 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:11:07.380893    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:07.380978    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:07.380983    5580 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0721 17:11:07.446426    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0721 17:11:07.446435    5580 buildroot.go:70] root file system type: tmpfs
	I0721 17:11:07.446487    5580 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0721 17:11:07.446540    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:07.446646    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:07.446681    5580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0721 17:11:07.514958    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0721 17:11:07.515016    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:07.515143    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:07.515162    5580 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0721 17:11:07.851418    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0721 17:11:07.851433    5580 machine.go:97] duration metric: took 858.691042ms to provisionDockerMachine
	I0721 17:11:07.851439    5580 start.go:293] postStartSetup for "stopped-upgrade-930000" (driver="qemu2")
	I0721 17:11:07.851446    5580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 17:11:07.851505    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 17:11:07.851518    5580 sshutil.go:53] new ssh client: &{IP:localhost Port:50452 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/id_rsa Username:docker}
	I0721 17:11:07.884356    5580 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 17:11:07.885656    5580 info.go:137] Remote host: Buildroot 2021.02.12
	I0721 17:11:07.885664    5580 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1409/.minikube/addons for local assets ...
	I0721 17:11:07.885744    5580 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1409/.minikube/files for local assets ...
	I0721 17:11:07.885865    5580 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1409/.minikube/files/etc/ssl/certs/19112.pem -> 19112.pem in /etc/ssl/certs
	I0721 17:11:07.885989    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0721 17:11:07.889055    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/files/etc/ssl/certs/19112.pem --> /etc/ssl/certs/19112.pem (1708 bytes)
	I0721 17:11:07.896083    5580 start.go:296] duration metric: took 44.640459ms for postStartSetup
	I0721 17:11:07.896096    5580 fix.go:56] duration metric: took 20.82484075s for fixHost
	I0721 17:11:07.896129    5580 main.go:141] libmachine: Using SSH client type: native
	I0721 17:11:07.896236    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100686a10] 0x100689270 <nil>  [] 0s} localhost 50452 <nil> <nil>}
	I0721 17:11:07.896241    5580 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0721 17:11:07.958536    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721607068.069196796
	
	I0721 17:11:07.958547    5580 fix.go:216] guest clock: 1721607068.069196796
	I0721 17:11:07.958551    5580 fix.go:229] Guest: 2024-07-21 17:11:08.069196796 -0700 PDT Remote: 2024-07-21 17:11:07.896098 -0700 PDT m=+20.938203001 (delta=173.098796ms)
	I0721 17:11:07.958564    5580 fix.go:200] guest clock delta is within tolerance: 173.098796ms
	I0721 17:11:07.958568    5580 start.go:83] releasing machines lock for "stopped-upgrade-930000", held for 20.887321041s
	I0721 17:11:07.958627    5580 ssh_runner.go:195] Run: cat /version.json
	I0721 17:11:07.958636    5580 sshutil.go:53] new ssh client: &{IP:localhost Port:50452 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/id_rsa Username:docker}
	I0721 17:11:07.958646    5580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0721 17:11:07.958664    5580 sshutil.go:53] new ssh client: &{IP:localhost Port:50452 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/id_rsa Username:docker}
	W0721 17:11:07.959187    5580 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50572->127.0.0.1:50452: write: connection reset by peer
	I0721 17:11:07.959205    5580 retry.go:31] will retry after 369.011209ms: ssh: handshake failed: write tcp 127.0.0.1:50572->127.0.0.1:50452: write: connection reset by peer
	W0721 17:11:08.385143    5580 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0721 17:11:08.385250    5580 ssh_runner.go:195] Run: systemctl --version
	I0721 17:11:08.388212    5580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 17:11:08.391706    5580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 17:11:08.391766    5580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0721 17:11:08.396856    5580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0721 17:11:08.414966    5580 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0721 17:11:08.414984    5580 start.go:495] detecting cgroup driver to use...
	I0721 17:11:08.415074    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 17:11:08.421510    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0721 17:11:08.424696    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0721 17:11:08.428118    5580 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0721 17:11:08.428141    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0721 17:11:08.431020    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 17:11:08.433860    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0721 17:11:08.437376    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 17:11:08.440717    5580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 17:11:08.444214    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0721 17:11:08.447168    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0721 17:11:08.449961    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0721 17:11:08.453176    5580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 17:11:08.456378    5580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 17:11:08.459309    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:08.545196    5580 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0721 17:11:08.551626    5580 start.go:495] detecting cgroup driver to use...
	I0721 17:11:08.551686    5580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0721 17:11:08.560910    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 17:11:08.565638    5580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 17:11:08.572303    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 17:11:08.576841    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0721 17:11:08.581271    5580 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0721 17:11:08.637205    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0721 17:11:08.642446    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 17:11:08.647795    5580 ssh_runner.go:195] Run: which cri-dockerd
	I0721 17:11:08.649116    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0721 17:11:08.652168    5580 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0721 17:11:08.657185    5580 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0721 17:11:08.726343    5580 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0721 17:11:08.790401    5580 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0721 17:11:08.790467    5580 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0721 17:11:08.795460    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:08.876288    5580 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0721 17:11:10.034123    5580 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157848792s)
	I0721 17:11:10.034191    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0721 17:11:10.039145    5580 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0721 17:11:10.045123    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0721 17:11:10.050442    5580 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0721 17:11:10.110666    5580 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0721 17:11:10.176496    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:10.240155    5580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0721 17:11:10.245522    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0721 17:11:10.250388    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:10.308185    5580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0721 17:11:10.346443    5580 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0721 17:11:10.346525    5580 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0721 17:11:10.350536    5580 start.go:563] Will wait 60s for crictl version
	I0721 17:11:10.350599    5580 ssh_runner.go:195] Run: which crictl
	I0721 17:11:10.351901    5580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 17:11:10.366463    5580 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0721 17:11:10.366532    5580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0721 17:11:10.382435    5580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0721 17:11:10.407277    5580 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0721 17:11:10.407341    5580 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0721 17:11:10.408600    5580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 17:11:10.412216    5580 kubeadm.go:883] updating cluster {Name:stopped-upgrade-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0721 17:11:10.412260    5580 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0721 17:11:10.412298    5580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0721 17:11:10.422477    5580 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0721 17:11:10.422485    5580 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0721 17:11:10.422530    5580 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0721 17:11:10.425692    5580 ssh_runner.go:195] Run: which lz4
	I0721 17:11:10.427081    5580 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0721 17:11:10.428261    5580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0721 17:11:10.428270    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0721 17:11:11.374936    5580 docker.go:649] duration metric: took 947.911125ms to copy over tarball
	I0721 17:11:11.374991    5580 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0721 17:11:12.537974    5580 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.163001791s)
	I0721 17:11:12.537991    5580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0721 17:11:12.553879    5580 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0721 17:11:12.557246    5580 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0721 17:11:12.562623    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:12.623240    5580 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0721 17:11:14.320488    5580 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.697277083s)
	I0721 17:11:14.320592    5580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0721 17:11:14.333236    5580 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0721 17:11:14.333244    5580 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0721 17:11:14.333250    5580 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0721 17:11:14.337495    5580 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:11:14.339584    5580 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:11:14.342008    5580 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:11:14.342105    5580 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:11:14.343499    5580 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:11:14.343568    5580 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:11:14.345183    5580 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:11:14.345194    5580 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:11:14.346753    5580 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0721 17:11:14.346768    5580 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:11:14.348087    5580 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:11:14.348114    5580 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0721 17:11:14.349868    5580 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:11:14.349924    5580 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0721 17:11:14.350715    5580 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0721 17:11:14.352293    5580 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:11:16.557896    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:11:16.596146    5580 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0721 17:11:16.596207    5580 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:11:16.596320    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0721 17:11:16.617309    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0721 17:11:16.647262    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:11:16.665226    5580 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0721 17:11:16.665247    5580 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:11:16.665305    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0721 17:11:16.678355    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0721 17:11:16.701093    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0721 17:11:16.711795    5580 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0721 17:11:16.711814    5580 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0721 17:11:16.711873    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0721 17:11:16.722029    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0721 17:11:16.723404    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:11:16.733683    5580 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0721 17:11:16.733703    5580 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:11:16.733763    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0721 17:11:16.744347    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0721 17:11:17.239912    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0721 17:11:17.248644    5580 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0721 17:11:17.248801    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:11:17.259682    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:11:17.261034    5580 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0721 17:11:17.261053    5580 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0721 17:11:17.261089    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0721 17:11:17.286561    5580 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0721 17:11:17.286585    5580 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:11:17.286648    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0721 17:11:17.288457    5580 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0721 17:11:17.288470    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0721 17:11:17.288472    5580 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:11:17.288507    5580 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0721 17:11:17.288568    5580 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0721 17:11:17.299760    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0721 17:11:17.299887    5580 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	W0721 17:11:17.300488    5580 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0721 17:11:17.300582    5580 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:11:17.302860    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0721 17:11:17.302870    5580 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0721 17:11:17.302884    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0721 17:11:17.302897    5580 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0721 17:11:17.302909    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0721 17:11:17.316934    5580 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0721 17:11:17.316957    5580 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:11:17.317009    5580 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:11:17.323561    5580 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0721 17:11:17.323578    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0721 17:11:17.344619    5580 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0721 17:11:17.344745    5580 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0721 17:11:17.389571    5580 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0721 17:11:17.389592    5580 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0721 17:11:17.389598    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0721 17:11:17.389609    5580 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0721 17:11:17.389634    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0721 17:11:17.450024    5580 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0721 17:11:17.450054    5580 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0721 17:11:17.450060    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0721 17:11:17.683748    5580 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
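Every cached image in this block follows the same transfer-and-load pattern: stat the archive path on the node, scp the image tarball over from the local cache when the stat fails, then pipe the archive into docker load. Condensed for a single image, the sequence looks roughly like this (the on-node path is the pause_3.7 one from the log; the "node" SSH alias and the simplified cache path are hypothetical):

    IMG=/var/lib/minikube/images/pause_3.7
    CACHE=~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7   # simplified local cache path
    # transfer only when the archive is not already present on the node
    if ! ssh node stat -c '%s %y' "$IMG" >/dev/null 2>&1; then
        scp "$CACHE" node:"$IMG"
    fi
    # stream the archive into the node's Docker daemon
    ssh node "sudo cat $IMG | docker load"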
	I0721 17:11:17.683785    5580 cache_images.go:92] duration metric: took 3.35062225s to LoadCachedImages
	W0721 17:11:17.683830    5580 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0721 17:11:17.683836    5580 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0721 17:11:17.683886    5580 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-930000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0721 17:11:17.683952    5580 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0721 17:11:17.697362    5580 cni.go:84] Creating CNI manager for ""
	I0721 17:11:17.697375    5580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:11:17.697380    5580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0721 17:11:17.697389    5580 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-930000 NodeName:stopped-upgrade-930000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0721 17:11:17.697455    5580 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-930000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0721 17:11:17.697503    5580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0721 17:11:17.700979    5580 binaries.go:44] Found k8s binaries, skipping transfer
	I0721 17:11:17.701003    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0721 17:11:17.704157    5580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0721 17:11:17.709383    5580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 17:11:17.714282    5580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0721 17:11:17.719738    5580 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0721 17:11:17.720925    5580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 17:11:17.724553    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:11:17.786414    5580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 17:11:17.792518    5580 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000 for IP: 10.0.2.15
	I0721 17:11:17.792531    5580 certs.go:194] generating shared ca certs ...
	I0721 17:11:17.792539    5580 certs.go:226] acquiring lock for ca certs: {Name:mke4827a2590eed55d39c612acfba4d65d3007ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:11:17.792703    5580 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.key
	I0721 17:11:17.792755    5580 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/proxy-client-ca.key
	I0721 17:11:17.792760    5580 certs.go:256] generating profile certs ...
	I0721 17:11:17.792833    5580 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/client.key
	I0721 17:11:17.792852    5580 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.key.75e49a33
	I0721 17:11:17.792863    5580 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.crt.75e49a33 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0721 17:11:17.893475    5580 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.crt.75e49a33 ...
	I0721 17:11:17.893486    5580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.crt.75e49a33: {Name:mk79f4899f7306d2c1b64bd6b3b7c05e91307157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:11:17.893790    5580 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.key.75e49a33 ...
	I0721 17:11:17.893795    5580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.key.75e49a33: {Name:mk68413454cdd12cdcb821263e9207a0c1ecc72d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:11:17.893940    5580 certs.go:381] copying /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.crt.75e49a33 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.crt
	I0721 17:11:17.894663    5580 certs.go:385] copying /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.key.75e49a33 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.key
	I0721 17:11:17.894857    5580 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/proxy-client.key
	I0721 17:11:17.894996    5580 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/1911.pem (1338 bytes)
	W0721 17:11:17.895026    5580 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/1911_empty.pem, impossibly tiny 0 bytes
	I0721 17:11:17.895032    5580 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca-key.pem (1679 bytes)
	I0721 17:11:17.895059    5580 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem (1078 bytes)
	I0721 17:11:17.895086    5580 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem (1123 bytes)
	I0721 17:11:17.895111    5580 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/key.pem (1675 bytes)
	I0721 17:11:17.895374    5580 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1409/.minikube/files/etc/ssl/certs/19112.pem (1708 bytes)
	I0721 17:11:17.895699    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 17:11:17.902307    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 17:11:17.909122    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 17:11:17.916341    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0721 17:11:17.923414    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0721 17:11:17.930288    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0721 17:11:17.936841    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 17:11:17.944279    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 17:11:17.951065    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/1911.pem --> /usr/share/ca-certificates/1911.pem (1338 bytes)
	I0721 17:11:17.957672    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/files/etc/ssl/certs/19112.pem --> /usr/share/ca-certificates/19112.pem (1708 bytes)
	I0721 17:11:17.964801    5580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 17:11:17.971684    5580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0721 17:11:17.976773    5580 ssh_runner.go:195] Run: openssl version
	I0721 17:11:17.978775    5580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19112.pem && ln -fs /usr/share/ca-certificates/19112.pem /etc/ssl/certs/19112.pem"
	I0721 17:11:17.981559    5580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19112.pem
	I0721 17:11:17.982985    5580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:32 /usr/share/ca-certificates/19112.pem
	I0721 17:11:17.983008    5580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19112.pem
	I0721 17:11:17.984634    5580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19112.pem /etc/ssl/certs/3ec20f2e.0"
	I0721 17:11:17.987367    5580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 17:11:17.990038    5580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 17:11:17.991454    5580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:24 /usr/share/ca-certificates/minikubeCA.pem
	I0721 17:11:17.991473    5580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 17:11:17.993094    5580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0721 17:11:17.996189    5580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1911.pem && ln -fs /usr/share/ca-certificates/1911.pem /etc/ssl/certs/1911.pem"
	I0721 17:11:17.998888    5580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1911.pem
	I0721 17:11:18.000208    5580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:32 /usr/share/ca-certificates/1911.pem
	I0721 17:11:18.000227    5580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1911.pem
	I0721 17:11:18.001972    5580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1911.pem /etc/ssl/certs/51391683.0"
	I0721 17:11:18.005206    5580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 17:11:18.006614    5580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0721 17:11:18.008398    5580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0721 17:11:18.010235    5580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0721 17:11:18.012175    5580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0721 17:11:18.013898    5580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0721 17:11:18.015746    5580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
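The openssl probes above all pass -checkend 86400, which makes openssl exit successfully only if the certificate is readable and still valid 86400 seconds (24 hours) from now; a non-zero exit flags the certificate as missing, unreadable, or about to expire. As a standalone check, the same probe looks like this (the path is the apiserver-kubelet-client one from the log):

    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate missing, unreadable, or expiring within 24h"
    fi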
	I0721 17:11:18.017512    5580 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0721 17:11:18.017581    5580 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0721 17:11:18.027371    5580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0721 17:11:18.030499    5580 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0721 17:11:18.030503    5580 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0721 17:11:18.030525    5580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0721 17:11:18.033536    5580 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0721 17:11:18.033832    5580 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-930000" does not appear in /Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:11:18.033972    5580 kubeconfig.go:62] /Users/jenkins/minikube-integration/19312-1409/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-930000" cluster setting kubeconfig missing "stopped-upgrade-930000" context setting]
	I0721 17:11:18.034156    5580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/kubeconfig: {Name:mk941eb06ccb0e2f7fcbae3a7de63e740b813743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:11:18.034605    5580 kapi.go:59] client config for stopped-upgrade-930000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101a1b790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0721 17:11:18.034911    5580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0721 17:11:18.037448    5580 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-930000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0721 17:11:18.037453    5580 kubeadm.go:1160] stopping kube-system containers ...
	I0721 17:11:18.037490    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0721 17:11:18.048101    5580 docker.go:483] Stopping containers: [e507e67410b2 e51ba4e1d673 a5aa61dd685d ea215f4edd83 3b08d4c9ea9d 22353ec24f6d e619eab918db d445b75bd5c3]
	I0721 17:11:18.048162    5580 ssh_runner.go:195] Run: docker stop e507e67410b2 e51ba4e1d673 a5aa61dd685d ea215f4edd83 3b08d4c9ea9d 22353ec24f6d e619eab918db d445b75bd5c3
	I0721 17:11:18.059983    5580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0721 17:11:18.065309    5580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0721 17:11:18.068518    5580 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0721 17:11:18.068523    5580 kubeadm.go:157] found existing configuration files:
	
	I0721 17:11:18.068546    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf
	I0721 17:11:18.071192    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0721 17:11:18.071225    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0721 17:11:18.073650    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf
	I0721 17:11:18.076650    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0721 17:11:18.076669    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0721 17:11:18.079459    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf
	I0721 17:11:18.081792    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0721 17:11:18.081812    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0721 17:11:18.084779    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf
	I0721 17:11:18.087745    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0721 17:11:18.087766    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
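The four checks above apply the same rule to each kubeconfig file (admin, kubelet, controller-manager, scheduler): grep it for the expected control-plane endpoint and, when the grep fails (here because none of the files exist yet), remove the file so the following kubeadm init phases can regenerate it. Rolled into a loop, the pattern is roughly the following (endpoint and paths as in the log; the loop form itself is just an illustration):

    ENDPOINT=https://control-plane.minikube.internal:50486
    for f in admin kubelet controller-manager scheduler; do
        # keep the file only if it already points at the expected endpoint
        if ! sudo grep "$ENDPOINT" "/etc/kubernetes/$f.conf" >/dev/null 2>&1; then
            sudo rm -f "/etc/kubernetes/$f.conf"
        fi
    done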
	I0721 17:11:18.090134    5580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0721 17:11:18.093231    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:11:18.115504    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:11:18.432529    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:11:18.562545    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:11:18.585724    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0721 17:11:18.617486    5580 api_server.go:52] waiting for apiserver process to appear ...
	I0721 17:11:18.617568    5580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:11:19.119657    5580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:11:19.619574    5580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:11:19.626734    5580 api_server.go:72] duration metric: took 1.009276167s to wait for apiserver process to appear ...
	I0721 17:11:19.626746    5580 api_server.go:88] waiting for apiserver healthz status ...
	I0721 17:11:19.626760    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:24.627281    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:24.627302    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:29.628588    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:29.628631    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:34.629032    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:34.629075    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:39.629428    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:39.629447    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:44.629806    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:44.629842    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:49.630420    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:49.630467    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:54.631641    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:54.631687    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:11:59.632922    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:11:59.632992    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:04.634677    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:04.634709    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:09.636612    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:09.636653    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:14.638765    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:14.638822    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:19.640991    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
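Each healthz probe above is given five seconds before being recorded as stopped, and the check is retried until minikube gives up and falls back to gathering container logs (below). A rough curl equivalent of that poll loop (purely illustrative, not minikube's implementation; -k skips verification of the apiserver's self-signed certificate, and the retry count is arbitrary):

    URL=https://10.0.2.15:8443/healthz
    for attempt in $(seq 1 12); do
        # give each probe a 5-second budget, mirroring the timeouts in the log
        code=$(curl -k -s -o /dev/null -w '%{http_code}' --max-time 5 "$URL") && [ "$code" = 200 ] && break
        echo "apiserver not healthy yet (attempt $attempt)"
    done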
	I0721 17:12:19.641141    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:19.659001    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:12:19.659079    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:19.670103    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:12:19.670167    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:19.680762    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:12:19.680823    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:19.690823    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:12:19.690896    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:19.701439    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:12:19.701506    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:19.712513    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:12:19.712594    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:19.728140    5580 logs.go:276] 0 containers: []
	W0721 17:12:19.728158    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:19.728214    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:19.738831    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:12:19.738852    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:12:19.738857    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:12:19.764726    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:12:19.764736    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:12:19.778842    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:12:19.778853    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:12:19.794734    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:12:19.794745    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:12:19.809564    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:12:19.809576    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:12:19.820796    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:19.820808    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:19.925030    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:12:19.925042    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:12:19.938704    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:12:19.938715    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:12:19.952669    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:19.952679    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:19.977260    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:12:19.977269    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:12:19.988407    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:12:19.988416    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:12:20.000229    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:12:20.000240    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:12:20.012293    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:12:20.012306    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:12:20.029463    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:12:20.029473    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:12:20.040785    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:12:20.040795    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:20.052937    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:20.052949    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:12:20.090691    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:20.090700    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:12:22.594823    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:27.596988    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:27.597239    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:27.618843    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:12:27.618950    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:27.633089    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:12:27.633164    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:27.645233    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:12:27.645304    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:27.655905    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:12:27.655979    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:27.666105    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:12:27.666173    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:27.676669    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:12:27.676740    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:27.686925    5580 logs.go:276] 0 containers: []
	W0721 17:12:27.686936    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:27.686996    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:27.697477    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:12:27.697498    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:12:27.697503    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:12:27.722492    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:12:27.722503    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:12:27.737101    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:12:27.737112    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:12:27.753983    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:12:27.753994    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:27.765264    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:27.765279    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:27.804326    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:12:27.804338    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:12:27.819133    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:12:27.819143    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:12:27.831696    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:12:27.831709    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:12:27.845368    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:27.845380    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:12:27.849604    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:12:27.849613    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:12:27.863312    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:12:27.863322    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:12:27.875550    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:27.875561    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:27.901526    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:27.901535    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:12:27.939431    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:12:27.939438    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:12:27.953144    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:12:27.953152    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:12:27.967584    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:12:27.967595    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:12:27.979209    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:12:27.979222    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:12:30.491367    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:35.493717    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:35.494025    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:35.526464    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:12:35.526600    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:35.546676    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:12:35.546771    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:35.561359    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:12:35.561449    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:35.574199    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:12:35.574273    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:35.585120    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:12:35.585198    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:35.596638    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:12:35.596709    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:35.607679    5580 logs.go:276] 0 containers: []
	W0721 17:12:35.607691    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:35.607752    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:35.619123    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:12:35.619141    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:12:35.619156    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:12:35.631064    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:35.631074    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:12:35.670351    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:35.670359    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:12:35.674774    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:12:35.674783    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:12:35.699894    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:12:35.699906    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:12:35.712374    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:12:35.712385    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:12:35.723413    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:12:35.723423    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:35.735391    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:35.735401    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:35.771503    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:12:35.771516    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:12:35.785216    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:12:35.785226    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:12:35.799334    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:12:35.799346    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:12:35.814111    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:12:35.814121    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:12:35.831646    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:12:35.831655    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:12:35.846802    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:35.846811    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:35.872750    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:12:35.872759    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:12:35.886136    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:12:35.886147    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:12:35.898126    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:12:35.898137    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:12:38.412812    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:43.414966    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:43.415117    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:43.426243    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:12:43.426319    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:43.437000    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:12:43.437068    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:43.448819    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:12:43.448889    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:43.459362    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:12:43.459432    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:43.470255    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:12:43.470320    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:43.483797    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:12:43.483869    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:43.493678    5580 logs.go:276] 0 containers: []
	W0721 17:12:43.493687    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:43.493746    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:43.504270    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:12:43.504288    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:43.504293    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:12:43.540899    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:12:43.540908    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:12:43.555600    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:12:43.555611    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:12:43.570598    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:43.570610    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:43.605514    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:12:43.605525    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:12:43.617221    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:43.617232    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:43.642086    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:12:43.642094    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:43.653847    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:43.653857    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:12:43.658315    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:12:43.658321    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:12:43.674267    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:12:43.674276    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:12:43.689426    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:12:43.689438    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:12:43.706299    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:12:43.706312    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:12:43.727473    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:12:43.727484    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:12:43.751893    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:12:43.751903    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:12:43.765059    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:12:43.765069    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:12:43.779726    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:12:43.779738    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:12:43.791238    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:12:43.791250    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:12:46.310426    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:51.312746    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:51.312959    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:51.327782    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:12:51.327859    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:51.339747    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:12:51.339811    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:51.350625    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:12:51.350690    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:51.366498    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:12:51.366573    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:51.376876    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:12:51.376943    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:51.388492    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:12:51.388562    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:51.398781    5580 logs.go:276] 0 containers: []
	W0721 17:12:51.398793    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:51.398852    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:51.409621    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:12:51.409639    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:12:51.409644    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:51.422131    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:51.422143    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:51.468279    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:12:51.468294    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:12:51.482794    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:12:51.482805    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:12:51.494493    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:12:51.494504    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:12:51.506573    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:12:51.506608    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:12:51.521261    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:51.521274    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:12:51.559806    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:12:51.559818    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:12:51.574450    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:12:51.574464    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:12:51.599801    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:12:51.599811    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:12:51.614420    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:51.614431    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:12:51.618948    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:12:51.618956    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:12:51.629699    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:12:51.629711    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:12:51.641007    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:12:51.641017    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:12:51.654233    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:12:51.654245    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:12:51.666818    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:12:51.666830    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:12:51.683768    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:51.683779    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:54.209346    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:12:59.211489    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:12:59.211687    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:12:59.230985    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:12:59.231080    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:12:59.245272    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:12:59.245344    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:12:59.261194    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:12:59.261261    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:12:59.271751    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:12:59.271822    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:12:59.282338    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:12:59.282407    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:12:59.295588    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:12:59.295648    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:12:59.305827    5580 logs.go:276] 0 containers: []
	W0721 17:12:59.305840    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:12:59.305888    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:12:59.316527    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:12:59.316544    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:12:59.316549    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:12:59.320884    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:12:59.320893    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:12:59.332814    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:12:59.332824    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:12:59.344192    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:12:59.344206    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:12:59.369384    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:12:59.369394    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:12:59.383509    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:12:59.383519    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:12:59.408245    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:12:59.408258    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:12:59.419874    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:12:59.419886    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:12:59.434577    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:12:59.434588    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:12:59.471118    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:12:59.471126    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:12:59.506092    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:12:59.506104    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:12:59.520160    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:12:59.520172    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:12:59.532391    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:12:59.532401    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:12:59.549980    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:12:59.549992    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:12:59.572728    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:12:59.572741    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:12:59.584119    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:12:59.584130    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:12:59.598842    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:12:59.598852    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:02.114715    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:07.117187    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:07.117428    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:07.141579    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:07.141695    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:07.158149    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:07.158230    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:07.171218    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:07.171290    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:07.181888    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:07.181958    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:07.195229    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:07.195298    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:07.205831    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:07.205903    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:07.216355    5580 logs.go:276] 0 containers: []
	W0721 17:13:07.216367    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:07.216430    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:07.227077    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:07.227094    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:07.227098    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:07.241094    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:07.241103    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:07.252652    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:07.252663    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:07.266262    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:07.266273    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:07.291476    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:07.291487    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:07.303719    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:07.303729    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:07.307809    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:07.307818    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:07.331872    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:07.331883    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:07.350310    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:07.350320    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:07.361620    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:07.361630    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:07.396789    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:07.396801    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:07.408053    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:07.408064    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:07.421614    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:07.421625    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:07.435743    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:07.435752    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:07.464642    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:07.464653    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:07.483489    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:07.483499    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:07.502156    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:07.502172    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:10.043541    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:15.045790    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:15.045960    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:15.059193    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:15.059294    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:15.074138    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:15.074236    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:15.084959    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:15.085031    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:15.095085    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:15.095151    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:15.105695    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:15.105772    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:15.117199    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:15.117287    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:15.129012    5580 logs.go:276] 0 containers: []
	W0721 17:13:15.129026    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:15.129098    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:15.140886    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:15.140905    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:15.140913    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:15.177067    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:15.177079    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:15.188897    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:15.188909    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:15.202662    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:15.202673    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:15.213810    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:15.213822    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:15.228216    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:15.228227    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:15.239747    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:15.239759    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:15.253797    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:15.253807    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:15.271882    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:15.271893    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:15.283740    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:15.283750    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:15.323509    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:15.323521    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:15.327844    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:15.327853    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:15.338874    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:15.338887    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:15.353650    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:15.353661    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:15.378498    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:15.378505    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:15.390114    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:15.390125    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:15.414173    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:15.414183    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:17.930071    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:22.932626    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:22.932923    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:22.959563    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:22.959691    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:22.978897    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:22.978975    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:22.992279    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:22.992356    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:23.003067    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:23.003137    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:23.013828    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:23.013895    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:23.024824    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:23.024896    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:23.035819    5580 logs.go:276] 0 containers: []
	W0721 17:13:23.035831    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:23.035892    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:23.046155    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:23.046173    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:23.046179    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:23.050683    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:23.050690    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:23.087700    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:23.087711    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:23.113325    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:23.113335    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:23.127197    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:23.127211    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:23.139646    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:23.139657    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:23.165720    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:23.165731    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:23.184454    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:23.184465    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:23.197703    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:23.197717    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:23.209181    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:23.209197    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:23.223323    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:23.223332    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:23.240001    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:23.240013    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:23.252944    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:23.252957    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:23.290828    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:23.290838    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:23.307121    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:23.307133    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:23.323541    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:23.323551    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:23.341777    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:23.341788    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:25.857956    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:30.860363    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:30.860548    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:30.873706    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:30.873783    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:30.884364    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:30.884440    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:30.895087    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:30.895156    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:30.905524    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:30.905603    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:30.915816    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:30.915888    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:30.926390    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:30.926451    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:30.935863    5580 logs.go:276] 0 containers: []
	W0721 17:13:30.935874    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:30.935931    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:30.946236    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:30.946252    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:30.946258    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:30.962642    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:30.962653    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:30.987625    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:30.987636    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:31.026267    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:31.026275    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:31.030794    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:31.030803    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:31.042321    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:31.042331    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:31.053478    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:31.053490    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:31.067589    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:31.067603    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:31.082336    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:31.082347    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:31.110886    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:31.110897    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:31.125134    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:31.125145    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:31.138101    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:31.138112    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:31.155276    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:31.155287    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:31.169189    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:31.169199    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:31.181608    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:31.181619    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:31.193710    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:31.193723    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:31.229189    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:31.229200    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:33.745304    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:38.746183    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:38.746340    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:38.759047    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:38.759118    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:38.770214    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:38.770284    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:38.780718    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:38.780796    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:38.791234    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:38.791301    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:38.801046    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:38.801115    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:38.811354    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:38.811422    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:38.821542    5580 logs.go:276] 0 containers: []
	W0721 17:13:38.821555    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:38.821618    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:38.831867    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:38.831887    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:38.831892    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:38.836411    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:38.836420    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:38.871447    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:38.871460    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:38.882718    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:38.882729    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:38.900764    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:38.900775    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:38.919394    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:38.919404    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:38.931633    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:38.931645    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:38.943246    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:38.943257    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:38.968371    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:38.968383    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:38.982915    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:38.982928    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:38.995167    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:38.995179    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:39.007342    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:39.007353    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:39.047311    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:39.047320    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:39.061765    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:39.061776    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:39.080647    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:39.080658    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:39.101494    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:39.101504    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:39.125695    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:39.125703    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:41.638991    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:46.641212    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:46.641432    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:46.664253    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:46.664376    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:46.679733    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:46.679816    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:46.692186    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:46.692259    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:46.703549    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:46.703627    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:46.720386    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:46.720451    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:46.730988    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:46.731063    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:46.741782    5580 logs.go:276] 0 containers: []
	W0721 17:13:46.741793    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:46.741851    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:46.752146    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:46.752166    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:46.752171    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:46.770301    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:46.770313    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:46.783350    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:46.783364    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:46.797956    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:46.797969    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:46.809151    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:46.809165    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:46.830317    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:46.830329    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:46.844446    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:46.844456    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:46.869242    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:46.869252    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:46.883971    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:46.883979    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:46.907313    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:46.907328    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:46.918640    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:46.918652    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:46.943083    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:46.943091    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:46.947583    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:46.947588    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:46.961411    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:46.961422    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:46.973465    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:46.973481    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:46.985785    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:46.985794    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:47.022139    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:47.022148    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:49.558045    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:13:54.560580    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:13:54.560844    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:13:54.588275    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:13:54.588377    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:13:54.603662    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:13:54.603753    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:13:54.615891    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:13:54.615963    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:13:54.629787    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:13:54.629856    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:13:54.640214    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:13:54.640283    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:13:54.650882    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:13:54.650947    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:13:54.660634    5580 logs.go:276] 0 containers: []
	W0721 17:13:54.660646    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:13:54.660705    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:13:54.670617    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:13:54.670635    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:13:54.670640    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:13:54.681903    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:13:54.681914    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:13:54.699558    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:13:54.699568    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:13:54.704189    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:13:54.704199    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:13:54.740300    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:13:54.740311    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:13:54.754878    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:13:54.754889    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:13:54.778828    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:13:54.778839    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:13:54.796084    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:13:54.796096    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:13:54.807675    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:13:54.807687    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:13:54.819771    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:13:54.819787    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:13:54.844360    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:13:54.844373    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:13:54.855726    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:13:54.855735    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:13:54.880351    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:13:54.880359    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:13:54.894085    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:13:54.894095    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:13:54.930469    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:13:54.930478    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:13:54.944086    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:13:54.944096    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:13:54.956173    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:13:54.956185    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:13:57.470400    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:02.472971    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:02.473340    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:02.508679    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:02.508809    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:02.526434    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:02.526510    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:02.539938    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:02.540015    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:02.551831    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:02.551915    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:02.563231    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:02.563305    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:02.576611    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:02.576680    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:02.587199    5580 logs.go:276] 0 containers: []
	W0721 17:14:02.587213    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:02.587277    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:02.597766    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:02.597785    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:02.597791    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:02.602166    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:02.602173    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:02.626946    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:02.626957    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:02.638837    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:02.638849    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:02.650503    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:02.650513    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:02.666012    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:02.666022    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:02.677983    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:02.677995    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:02.716046    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:02.716054    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:02.730919    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:02.730929    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:02.745242    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:02.745252    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:02.756733    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:02.756745    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:02.771264    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:02.771279    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:02.785748    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:02.785758    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:02.797630    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:02.797640    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:02.814588    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:02.814600    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:02.828027    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:02.828037    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:02.851890    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:02.851900    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:05.391173    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:10.392991    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:10.393086    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:10.404355    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:10.404433    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:10.416078    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:10.416152    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:10.427303    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:10.427366    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:10.438136    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:10.438200    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:10.449297    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:10.449366    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:10.460420    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:10.460497    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:10.471499    5580 logs.go:276] 0 containers: []
	W0721 17:14:10.471510    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:10.471569    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:10.483423    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:10.483443    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:10.483448    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:10.523669    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:10.523683    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:10.539954    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:10.539970    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:10.553220    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:10.553233    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:10.566296    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:10.566308    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:10.608470    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:10.608481    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:10.635083    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:10.635096    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:10.650970    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:10.650985    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:10.663649    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:10.663662    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:10.676904    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:10.676912    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:10.688556    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:10.688566    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:10.700092    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:10.700103    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:10.704463    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:10.704469    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:10.718915    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:10.718928    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:10.736503    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:10.736517    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:10.760492    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:10.760504    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:10.778561    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:10.778574    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:13.292602    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:18.293270    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:18.293452    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:18.308588    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:18.308668    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:18.320142    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:18.320216    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:18.330504    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:18.330573    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:18.341731    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:18.341816    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:18.353114    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:18.353181    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:18.364269    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:18.364342    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:18.374765    5580 logs.go:276] 0 containers: []
	W0721 17:14:18.374780    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:18.374835    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:18.388737    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:18.388755    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:18.388760    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:18.424045    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:18.424056    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:18.438088    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:18.438101    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:18.463211    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:18.463221    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:18.477031    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:18.477043    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:18.488120    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:18.488131    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:18.499354    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:18.499368    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:18.517253    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:18.517264    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:18.521445    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:18.521452    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:18.544029    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:18.544039    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:18.556509    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:18.556520    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:18.573427    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:18.573438    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:18.588419    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:18.588429    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:18.626193    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:18.626206    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:18.638323    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:18.638337    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:18.649731    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:18.649742    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:18.664947    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:18.664959    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:21.185729    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:26.187885    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:26.188023    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:26.203122    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:26.203187    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:26.213825    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:26.213901    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:26.223976    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:26.224042    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:26.234422    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:26.234500    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:26.244780    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:26.244845    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:26.255079    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:26.255158    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:26.265720    5580 logs.go:276] 0 containers: []
	W0721 17:14:26.265731    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:26.265791    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:26.276649    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:26.276667    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:26.276672    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:26.301430    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:26.301442    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:26.313326    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:26.313337    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:26.332804    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:26.332813    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:26.348696    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:26.348708    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:26.362416    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:26.362427    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:26.376922    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:26.376930    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:26.388544    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:26.388559    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:26.426833    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:26.426842    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:26.440028    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:26.440042    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:26.451660    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:26.451671    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:26.463487    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:26.463497    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:26.488032    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:26.488039    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:26.492384    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:26.492393    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:26.506536    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:26.506546    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:26.517275    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:26.517285    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:26.529455    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:26.529465    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:29.064466    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:34.066615    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:34.066980    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:34.101545    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:34.101677    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:34.119901    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:34.119989    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:34.133050    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:34.133124    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:34.146770    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:34.146838    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:34.157958    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:34.158035    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:34.168657    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:34.168722    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:34.179169    5580 logs.go:276] 0 containers: []
	W0721 17:14:34.179184    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:34.179244    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:34.190454    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:34.190474    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:34.190479    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:34.227279    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:34.227289    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:34.231537    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:34.231545    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:34.245765    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:34.245775    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:34.265488    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:34.265500    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:34.290075    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:34.290086    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:34.304645    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:34.304656    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:34.321651    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:34.321663    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:34.357435    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:34.357446    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:34.371737    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:34.371750    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:34.382760    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:34.382772    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:34.394262    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:34.394274    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:34.408919    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:34.408929    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:34.432682    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:34.432690    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:34.448041    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:34.448052    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:34.460451    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:34.460462    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:34.471966    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:34.471978    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:36.985185    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:41.986134    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:41.986455    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:42.022299    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:42.022442    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:42.047163    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:42.047251    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:42.060932    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:42.061007    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:42.072777    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:42.072853    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:42.083626    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:42.083697    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:42.095249    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:42.095323    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:42.106046    5580 logs.go:276] 0 containers: []
	W0721 17:14:42.106058    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:42.106119    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:42.119661    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:42.119680    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:42.119686    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:42.165743    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:42.165765    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:42.171184    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:42.171200    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:42.183008    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:42.183021    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:42.194449    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:42.194460    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:42.219596    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:42.219607    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:42.234117    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:42.234127    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:42.246242    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:42.246251    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:42.257438    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:42.257449    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:42.279460    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:42.279469    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:42.291479    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:42.291490    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:42.330882    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:42.330893    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:42.346502    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:42.346512    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:42.360753    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:42.360762    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:42.375177    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:42.375186    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:42.387220    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:42.387232    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:42.404854    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:42.404865    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:44.920378    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:49.922742    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:49.923197    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:49.961341    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:49.961475    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:49.982610    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:49.982715    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:49.998127    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:49.998203    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:50.011180    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:50.011261    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:50.022549    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:50.022612    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:50.033609    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:50.033680    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:50.043646    5580 logs.go:276] 0 containers: []
	W0721 17:14:50.043659    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:50.043719    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:50.054181    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:50.054198    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:50.054202    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:50.073364    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:50.073375    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:50.088833    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:50.088843    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:50.100666    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:50.100676    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:14:50.123703    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:50.123713    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:50.128046    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:50.128054    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:50.142209    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:50.142219    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:50.166765    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:50.166775    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:50.180009    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:50.180022    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:50.192741    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:50.192752    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:50.204278    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:50.204288    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:50.221651    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:50.221661    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:50.257989    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:50.258000    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:50.293065    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:50.293077    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:50.307560    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:50.307571    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:50.318725    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:50.318736    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:50.332932    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:50.332943    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:52.846626    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:14:57.848940    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:14:57.849279    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:14:57.881750    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:14:57.881879    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:14:57.900622    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:14:57.900712    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:14:57.915059    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:14:57.915126    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:14:57.926977    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:14:57.927058    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:14:57.938189    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:14:57.938262    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:14:57.948652    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:14:57.948723    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:14:57.959432    5580 logs.go:276] 0 containers: []
	W0721 17:14:57.959443    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:14:57.959504    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:14:57.974306    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:14:57.974323    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:14:57.974328    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:14:58.012912    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:14:58.012922    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:14:58.052031    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:14:58.052041    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:14:58.064372    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:14:58.064384    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:14:58.078498    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:14:58.078509    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:14:58.092098    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:14:58.092108    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:14:58.103502    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:14:58.103513    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:14:58.114663    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:14:58.114675    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:14:58.119359    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:14:58.119367    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:14:58.149429    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:14:58.149444    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:14:58.164256    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:14:58.164268    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:14:58.176261    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:14:58.176276    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:14:58.194566    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:14:58.194580    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:14:58.210701    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:14:58.210718    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:14:58.224690    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:14:58.224700    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:14:58.244279    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:14:58.244288    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:14:58.258098    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:14:58.258109    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:15:00.783855    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:05.786089    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:05.786296    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:15:05.799726    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:15:05.799810    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:15:05.811526    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:15:05.811598    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:15:05.821933    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:15:05.822001    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:15:05.832155    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:15:05.832234    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:15:05.842293    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:15:05.842362    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:15:05.853219    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:15:05.853289    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:15:05.863773    5580 logs.go:276] 0 containers: []
	W0721 17:15:05.863788    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:15:05.863847    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:15:05.874356    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:15:05.874373    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:15:05.874379    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:15:05.899067    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:15:05.899079    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:15:05.913723    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:15:05.913735    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:15:05.927246    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:15:05.927260    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:15:05.938919    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:15:05.938933    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:15:05.953022    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:15:05.953036    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:15:05.968065    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:15:05.968075    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:15:05.980300    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:15:05.980313    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:15:06.018636    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:15:06.018648    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:15:06.023147    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:15:06.023154    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:15:06.036980    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:15:06.036993    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:15:06.048577    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:15:06.048589    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:15:06.062785    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:15:06.062797    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:15:06.088476    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:15:06.088484    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:15:06.099953    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:15:06.099965    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:15:06.133746    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:15:06.133760    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:15:06.148827    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:15:06.148838    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:15:08.668369    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:13.670437    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:13.670587    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:15:13.684011    5580 logs.go:276] 2 containers: [8cd6607d618e a5aa61dd685d]
	I0721 17:15:13.684099    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:15:13.694649    5580 logs.go:276] 2 containers: [8e10038fd010 22353ec24f6d]
	I0721 17:15:13.694712    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:15:13.710397    5580 logs.go:276] 1 containers: [d5841987f9f6]
	I0721 17:15:13.710464    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:15:13.724019    5580 logs.go:276] 2 containers: [bdbc0e657649 3b08d4c9ea9d]
	I0721 17:15:13.724085    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:15:13.734510    5580 logs.go:276] 1 containers: [efdf38bf49a9]
	I0721 17:15:13.734604    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:15:13.745461    5580 logs.go:276] 2 containers: [84f74ffb0ce0 e507e67410b2]
	I0721 17:15:13.745520    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:15:13.755524    5580 logs.go:276] 0 containers: []
	W0721 17:15:13.755536    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:15:13.755596    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:15:13.766218    5580 logs.go:276] 2 containers: [44f2a3898ee9 05bd3ff61e18]
	I0721 17:15:13.766239    5580 logs.go:123] Gathering logs for kube-scheduler [bdbc0e657649] ...
	I0721 17:15:13.766244    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdbc0e657649"
	I0721 17:15:13.777856    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:15:13.777867    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:15:13.800857    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:15:13.800865    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:15:13.812200    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:15:13.812214    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:15:13.850312    5580 logs.go:123] Gathering logs for etcd [8e10038fd010] ...
	I0721 17:15:13.850321    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e10038fd010"
	I0721 17:15:13.864401    5580 logs.go:123] Gathering logs for kube-proxy [efdf38bf49a9] ...
	I0721 17:15:13.864415    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efdf38bf49a9"
	I0721 17:15:13.883175    5580 logs.go:123] Gathering logs for kube-controller-manager [84f74ffb0ce0] ...
	I0721 17:15:13.883187    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84f74ffb0ce0"
	I0721 17:15:13.913283    5580 logs.go:123] Gathering logs for kube-controller-manager [e507e67410b2] ...
	I0721 17:15:13.913295    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e507e67410b2"
	I0721 17:15:13.927388    5580 logs.go:123] Gathering logs for storage-provisioner [44f2a3898ee9] ...
	I0721 17:15:13.927398    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44f2a3898ee9"
	I0721 17:15:13.938685    5580 logs.go:123] Gathering logs for storage-provisioner [05bd3ff61e18] ...
	I0721 17:15:13.938695    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd3ff61e18"
	I0721 17:15:13.950007    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:15:13.950019    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:15:13.954546    5580 logs.go:123] Gathering logs for etcd [22353ec24f6d] ...
	I0721 17:15:13.954554    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22353ec24f6d"
	I0721 17:15:13.969139    5580 logs.go:123] Gathering logs for coredns [d5841987f9f6] ...
	I0721 17:15:13.969150    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5841987f9f6"
	I0721 17:15:13.980648    5580 logs.go:123] Gathering logs for kube-scheduler [3b08d4c9ea9d] ...
	I0721 17:15:13.980662    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b08d4c9ea9d"
	I0721 17:15:13.995293    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:15:13.995302    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:15:14.031547    5580 logs.go:123] Gathering logs for kube-apiserver [8cd6607d618e] ...
	I0721 17:15:14.031559    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd6607d618e"
	I0721 17:15:14.045610    5580 logs.go:123] Gathering logs for kube-apiserver [a5aa61dd685d] ...
	I0721 17:15:14.045620    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5aa61dd685d"
	I0721 17:15:16.572407    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:21.572610    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:21.572684    5580 kubeadm.go:597] duration metric: took 4m3.548922666s to restartPrimaryControlPlane
	W0721 17:15:21.572719    5580 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0721 17:15:21.572733    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0721 17:15:22.615001    5580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.042283167s)
	I0721 17:15:22.615358    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 17:15:22.620318    5580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0721 17:15:22.623215    5580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0721 17:15:22.625968    5580 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0721 17:15:22.625975    5580 kubeadm.go:157] found existing configuration files:
	
	I0721 17:15:22.625997    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf
	I0721 17:15:22.628644    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0721 17:15:22.628666    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0721 17:15:22.631554    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf
	I0721 17:15:22.634597    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0721 17:15:22.634619    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0721 17:15:22.638129    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf
	I0721 17:15:22.640986    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0721 17:15:22.641005    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0721 17:15:22.643628    5580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf
	I0721 17:15:22.646384    5580 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0721 17:15:22.646406    5580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0721 17:15:22.649304    5580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0721 17:15:22.667223    5580 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0721 17:15:22.667296    5580 kubeadm.go:310] [preflight] Running pre-flight checks
	I0721 17:15:22.720039    5580 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0721 17:15:22.720098    5580 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0721 17:15:22.720162    5580 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0721 17:15:22.768966    5580 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0721 17:15:22.772191    5580 out.go:204]   - Generating certificates and keys ...
	I0721 17:15:22.772229    5580 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0721 17:15:22.772265    5580 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0721 17:15:22.772319    5580 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0721 17:15:22.772471    5580 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0721 17:15:22.772512    5580 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0721 17:15:22.772540    5580 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0721 17:15:22.772570    5580 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0721 17:15:22.772602    5580 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0721 17:15:22.772666    5580 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0721 17:15:22.772721    5580 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0721 17:15:22.772752    5580 kubeadm.go:310] [certs] Using the existing "sa" key
	I0721 17:15:22.772782    5580 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0721 17:15:22.858685    5580 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0721 17:15:22.921503    5580 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0721 17:15:22.969918    5580 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0721 17:15:23.125124    5580 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0721 17:15:23.153447    5580 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0721 17:15:23.153831    5580 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0721 17:15:23.153877    5580 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0721 17:15:23.239147    5580 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0721 17:15:23.246303    5580 out.go:204]   - Booting up control plane ...
	I0721 17:15:23.246444    5580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0721 17:15:23.246488    5580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0721 17:15:23.246518    5580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0721 17:15:23.246584    5580 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0721 17:15:23.246707    5580 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0721 17:15:27.241025    5580 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002662 seconds
	I0721 17:15:27.241106    5580 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0721 17:15:27.246144    5580 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0721 17:15:27.756147    5580 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0721 17:15:27.756347    5580 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-930000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0721 17:15:28.261093    5580 kubeadm.go:310] [bootstrap-token] Using token: twdtae.3ljsgcwo9tgeaxu2
	I0721 17:15:28.267287    5580 out.go:204]   - Configuring RBAC rules ...
	I0721 17:15:28.267351    5580 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0721 17:15:28.267407    5580 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0721 17:15:28.269476    5580 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0721 17:15:28.274119    5580 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0721 17:15:28.275180    5580 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0721 17:15:28.276008    5580 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0721 17:15:28.289820    5580 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0721 17:15:28.413582    5580 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0721 17:15:28.665207    5580 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0721 17:15:28.665679    5580 kubeadm.go:310] 
	I0721 17:15:28.665710    5580 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0721 17:15:28.665715    5580 kubeadm.go:310] 
	I0721 17:15:28.665759    5580 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0721 17:15:28.665763    5580 kubeadm.go:310] 
	I0721 17:15:28.665785    5580 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0721 17:15:28.665831    5580 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0721 17:15:28.665867    5580 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0721 17:15:28.665873    5580 kubeadm.go:310] 
	I0721 17:15:28.665901    5580 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0721 17:15:28.665904    5580 kubeadm.go:310] 
	I0721 17:15:28.665937    5580 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0721 17:15:28.665940    5580 kubeadm.go:310] 
	I0721 17:15:28.665973    5580 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0721 17:15:28.666019    5580 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0721 17:15:28.666075    5580 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0721 17:15:28.666082    5580 kubeadm.go:310] 
	I0721 17:15:28.666135    5580 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0721 17:15:28.666181    5580 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0721 17:15:28.666184    5580 kubeadm.go:310] 
	I0721 17:15:28.666232    5580 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token twdtae.3ljsgcwo9tgeaxu2 \
	I0721 17:15:28.666303    5580 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:75e342b31cd1ca4bd3abd7fd07b163bfb3e06809b400a3ad400761b302299515 \
	I0721 17:15:28.666319    5580 kubeadm.go:310] 	--control-plane 
	I0721 17:15:28.666324    5580 kubeadm.go:310] 
	I0721 17:15:28.666385    5580 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0721 17:15:28.666388    5580 kubeadm.go:310] 
	I0721 17:15:28.666430    5580 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token twdtae.3ljsgcwo9tgeaxu2 \
	I0721 17:15:28.666490    5580 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:75e342b31cd1ca4bd3abd7fd07b163bfb3e06809b400a3ad400761b302299515 
	I0721 17:15:28.666677    5580 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0721 17:15:28.666686    5580 cni.go:84] Creating CNI manager for ""
	I0721 17:15:28.666696    5580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:15:28.671061    5580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0721 17:15:28.679020    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0721 17:15:28.682340    5580 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0721 17:15:28.688223    5580 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0721 17:15:28.688266    5580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 17:15:28.688288    5580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-930000 minikube.k8s.io/updated_at=2024_07_21T17_15_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=stopped-upgrade-930000 minikube.k8s.io/primary=true
	I0721 17:15:28.726229    5580 kubeadm.go:1113] duration metric: took 37.99475ms to wait for elevateKubeSystemPrivileges
	I0721 17:15:28.726244    5580 ops.go:34] apiserver oom_adj: -16
	I0721 17:15:28.726248    5580 kubeadm.go:394] duration metric: took 4m10.715684s to StartCluster
	I0721 17:15:28.726258    5580 settings.go:142] acquiring lock: {Name:mk7831d6c033f56ef11530d08a44142aeaa86fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:15:28.726348    5580 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:15:28.726756    5580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/kubeconfig: {Name:mk941eb06ccb0e2f7fcbae3a7de63e740b813743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:15:28.726945    5580 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:15:28.726983    5580 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0721 17:15:28.727032    5580 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:15:28.727041    5580 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-930000"
	I0721 17:15:28.727057    5580 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-930000"
	W0721 17:15:28.727060    5580 addons.go:243] addon storage-provisioner should already be in state true
	I0721 17:15:28.727065    5580 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-930000"
	I0721 17:15:28.727073    5580 host.go:66] Checking if "stopped-upgrade-930000" exists ...
	I0721 17:15:28.727077    5580 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-930000"
	I0721 17:15:28.727488    5580 retry.go:31] will retry after 566.889145ms: connect: dial unix /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/monitor: connect: connection refused
	I0721 17:15:28.728249    5580 kapi.go:59] client config for stopped-upgrade-930000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/stopped-upgrade-930000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101a1b790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0721 17:15:28.728372    5580 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-930000"
	W0721 17:15:28.728377    5580 addons.go:243] addon default-storageclass should already be in state true
	I0721 17:15:28.728383    5580 host.go:66] Checking if "stopped-upgrade-930000" exists ...
	I0721 17:15:28.728914    5580 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0721 17:15:28.728919    5580 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0721 17:15:28.728924    5580 sshutil.go:53] new ssh client: &{IP:localhost Port:50452 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/id_rsa Username:docker}
	I0721 17:15:28.730973    5580 out.go:177] * Verifying Kubernetes components...
	I0721 17:15:28.739006    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 17:15:28.824660    5580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 17:15:28.829740    5580 api_server.go:52] waiting for apiserver process to appear ...
	I0721 17:15:28.829799    5580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 17:15:28.831670    5580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0721 17:15:28.835721    5580 api_server.go:72] duration metric: took 108.767417ms to wait for apiserver process to appear ...
	I0721 17:15:28.835732    5580 api_server.go:88] waiting for apiserver healthz status ...
	I0721 17:15:28.835738    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:29.301221    5580 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 17:15:29.305175    5580 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 17:15:29.305183    5580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0721 17:15:29.305201    5580 sshutil.go:53] new ssh client: &{IP:localhost Port:50452 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/stopped-upgrade-930000/id_rsa Username:docker}
	I0721 17:15:29.339724    5580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 17:15:33.837733    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:33.837779    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:38.838034    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:38.838067    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:43.838320    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:43.838378    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:48.838822    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:48.838878    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:53.839451    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:53.839507    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:15:58.840299    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:15:58.840344    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0721 17:15:59.158376    5580 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0721 17:15:59.161808    5580 out.go:177] * Enabled addons: storage-provisioner
	I0721 17:15:59.171620    5580 addons.go:510] duration metric: took 30.445478125s for enable addons: enabled=[storage-provisioner]
	I0721 17:16:03.841294    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:16:03.841333    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:16:08.842620    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:16:08.842663    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:16:13.844285    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:16:13.844309    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:16:18.846301    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:16:18.846347    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:16:23.848462    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:16:23.848524    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:16:28.849676    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:16:28.849852    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:16:28.861606    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:16:28.861679    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:16:28.871975    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:16:28.872045    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:16:28.882384    5580 logs.go:276] 2 containers: [0e6ef086c383 ae732c1007fd]
	I0721 17:16:28.882454    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:16:28.893180    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:16:28.893249    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:16:28.903447    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:16:28.903516    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:16:28.913322    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:16:28.913386    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:16:28.923396    5580 logs.go:276] 0 containers: []
	W0721 17:16:28.923412    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:16:28.923476    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:16:28.933454    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:16:28.933473    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:16:28.933479    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:16:28.948225    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:16:28.948235    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:16:28.962360    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:16:28.962371    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:16:28.974372    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:16:28.974384    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:16:28.986695    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:16:28.986707    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:16:29.011386    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:16:29.011394    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:16:29.023241    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:16:29.023253    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:16:29.058633    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:16:29.058641    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:16:29.093332    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:16:29.093344    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:16:29.115435    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:16:29.115446    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:16:29.126975    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:16:29.126985    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:16:29.144073    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:16:29.144088    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:16:29.148183    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:16:29.148191    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:16:31.668527    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:16:36.671217    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:16:36.671584    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:16:36.706059    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:16:36.706174    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:16:36.724557    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:16:36.724647    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:16:36.737418    5580 logs.go:276] 2 containers: [0e6ef086c383 ae732c1007fd]
	I0721 17:16:36.737490    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:16:36.749177    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:16:36.749247    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:16:36.759568    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:16:36.759637    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:16:36.770305    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:16:36.770368    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:16:36.780893    5580 logs.go:276] 0 containers: []
	W0721 17:16:36.780904    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:16:36.780961    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:16:36.791884    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:16:36.791899    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:16:36.791903    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:16:36.803868    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:16:36.803879    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:16:36.839158    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:16:36.839172    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:16:36.853325    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:16:36.853336    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:16:36.864836    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:16:36.864847    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:16:36.876829    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:16:36.876838    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:16:36.894910    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:16:36.894920    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:16:36.906952    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:16:36.906963    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:16:36.931660    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:16:36.931669    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:16:36.966443    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:16:36.966453    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:16:36.970978    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:16:36.970986    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:16:36.985356    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:16:36.985368    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:16:37.000927    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:16:37.000937    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:16:39.514715    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:16:44.517380    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:16:44.517630    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:16:44.544306    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:16:44.544425    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:16:44.562130    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:16:44.562225    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:16:44.576241    5580 logs.go:276] 2 containers: [0e6ef086c383 ae732c1007fd]
	I0721 17:16:44.576320    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:16:44.587879    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:16:44.587945    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:16:44.598385    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:16:44.598449    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:16:44.608707    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:16:44.608772    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:16:44.618937    5580 logs.go:276] 0 containers: []
	W0721 17:16:44.618948    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:16:44.619004    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:16:44.629227    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:16:44.629247    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:16:44.629251    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:16:44.634156    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:16:44.634165    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:16:44.648020    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:16:44.648033    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:16:44.662749    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:16:44.662760    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:16:44.674293    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:16:44.674307    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:16:44.690943    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:16:44.690953    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:16:44.702264    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:16:44.702276    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:16:44.726932    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:16:44.726941    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:16:44.761513    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:16:44.761521    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:16:44.800412    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:16:44.800425    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:16:44.814905    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:16:44.814917    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:16:44.826673    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:16:44.826685    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:16:44.838031    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:16:44.838042    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:16:47.351464    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:16:52.354200    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:16:52.354637    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:16:52.394300    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:16:52.394429    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:16:52.416294    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:16:52.416390    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:16:52.431810    5580 logs.go:276] 2 containers: [0e6ef086c383 ae732c1007fd]
	I0721 17:16:52.431888    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:16:52.444366    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:16:52.444431    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:16:52.455584    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:16:52.455649    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:16:52.466412    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:16:52.466479    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:16:52.476646    5580 logs.go:276] 0 containers: []
	W0721 17:16:52.476658    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:16:52.476717    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:16:52.486969    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:16:52.486991    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:16:52.486995    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:16:52.509994    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:16:52.510007    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:16:52.521908    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:16:52.521920    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:16:52.534932    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:16:52.534941    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:16:52.550043    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:16:52.550054    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:16:52.562727    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:16:52.562739    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:16:52.586081    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:16:52.586090    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:16:52.619141    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:16:52.619148    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:16:52.623144    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:16:52.623150    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:16:52.634479    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:16:52.634490    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:16:52.647175    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:16:52.647186    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:16:52.664694    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:16:52.664704    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:16:52.698094    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:16:52.698103    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:16:55.215091    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:17:00.217365    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:17:00.217548    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:17:00.240415    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:17:00.240517    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:17:00.255700    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:17:00.255776    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:17:00.268511    5580 logs.go:276] 2 containers: [0e6ef086c383 ae732c1007fd]
	I0721 17:17:00.268582    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:17:00.280344    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:17:00.280405    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:17:00.290490    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:17:00.290577    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:17:00.301112    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:17:00.301174    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:17:00.311262    5580 logs.go:276] 0 containers: []
	W0721 17:17:00.311275    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:17:00.311329    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:17:00.321386    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:17:00.321403    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:17:00.321407    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:17:00.333533    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:17:00.333545    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:17:00.344969    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:17:00.344979    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:17:00.369675    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:17:00.369685    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:17:00.403898    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:17:00.403906    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:17:00.438549    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:17:00.438561    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:17:00.454915    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:17:00.454926    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:17:00.466360    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:17:00.466373    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:17:00.481280    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:17:00.481292    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:17:00.485908    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:17:00.485917    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:17:00.505235    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:17:00.505246    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:17:00.518301    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:17:00.518314    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:17:00.535676    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:17:00.535686    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:17:03.047450    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:17:08.050073    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:17:08.050510    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:17:08.089691    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:17:08.089818    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:17:08.111560    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:17:08.111654    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:17:08.126369    5580 logs.go:276] 2 containers: [0e6ef086c383 ae732c1007fd]
	I0721 17:17:08.126429    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:17:08.138418    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:17:08.138484    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:17:08.149380    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:17:08.149452    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:17:08.160133    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:17:08.160198    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:17:08.170637    5580 logs.go:276] 0 containers: []
	W0721 17:17:08.170647    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:17:08.170696    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:17:08.184960    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:17:08.184976    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:17:08.184981    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:17:08.197074    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:17:08.197087    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:17:08.216250    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:17:08.216259    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:17:08.228933    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:17:08.228945    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:17:08.264008    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:17:08.264016    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:17:08.268367    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:17:08.268377    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:17:08.303138    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:17:08.303150    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:17:08.315001    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:17:08.315013    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:17:08.329811    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:17:08.329825    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:17:08.344469    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:17:08.344483    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:17:08.358354    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:17:08.358366    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:17:08.374456    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:17:08.374467    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:17:08.385374    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:17:08.385387    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:17:10.911194    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:17:15.913737    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:17:15.914151    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:17:15.953169    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:17:15.953309    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:17:15.975559    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:17:15.975662    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:17:16.001683    5580 logs.go:276] 2 containers: [0e6ef086c383 ae732c1007fd]
	I0721 17:17:16.001755    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:17:16.013267    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:17:16.013336    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:17:16.023848    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:17:16.023916    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:17:16.034564    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:17:16.034631    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:17:16.050486    5580 logs.go:276] 0 containers: []
	W0721 17:17:16.050498    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:17:16.050548    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:17:16.061261    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:17:16.061279    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:17:16.061284    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:17:16.074144    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:17:16.074157    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:17:16.089563    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:17:16.089572    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:17:16.123760    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:17:16.123772    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:17:16.135427    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:17:16.135435    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:17:16.150095    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:17:16.150106    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:17:16.164504    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:17:16.164513    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:17:16.176378    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:17:16.176389    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:17:16.193680    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:17:16.193690    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:17:16.205611    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:17:16.205624    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:17:16.229058    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:17:16.229068    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:17:16.262145    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:17:16.262153    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:17:16.266337    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:17:16.266343    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:17:18.779066    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:17:23.781379    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:17:23.781760    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:17:23.819249    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:17:23.819392    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:17:23.840375    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:17:23.840483    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:17:23.856870    5580 logs.go:276] 2 containers: [0e6ef086c383 ae732c1007fd]
	I0721 17:17:23.856945    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:17:23.869519    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:17:23.869587    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:17:23.884660    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:17:23.884725    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:17:23.895336    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:17:23.895406    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:17:23.906872    5580 logs.go:276] 0 containers: []
	W0721 17:17:23.906885    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:17:23.906942    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:17:23.918776    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:17:23.918791    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:17:23.918795    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:17:23.953490    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:17:23.953501    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:17:23.958129    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:17:23.958138    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:17:23.972482    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:17:23.972495    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:17:23.986683    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:17:23.986692    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:17:23.998901    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:17:23.998912    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:17:24.011377    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:17:24.011390    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:17:24.028738    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:17:24.028748    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:17:24.046201    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:17:24.046213    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:17:24.070287    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:17:24.070297    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:17:24.082543    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:17:24.082553    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:17:24.119871    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:17:24.119882    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:17:24.132285    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:17:24.132300    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:17:26.652316    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:17:31.654656    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:17:31.654978    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:17:31.689316    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:17:31.689475    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:17:31.708462    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:17:31.708561    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:17:31.723799    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:17:31.723875    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:17:31.736393    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:17:31.736453    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:17:31.746545    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:17:31.746605    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:17:31.756969    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:17:31.757042    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:17:31.767176    5580 logs.go:276] 0 containers: []
	W0721 17:17:31.767186    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:17:31.767237    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:17:31.777852    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:17:31.777871    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:17:31.777877    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:17:31.792980    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:17:31.792993    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:17:31.809949    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:17:31.809958    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:17:31.821114    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:17:31.821126    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:17:31.845181    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:17:31.845192    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:17:31.859267    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:17:31.859277    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:17:31.869887    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:17:31.869901    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:17:31.880795    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:17:31.880807    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:17:31.895806    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:17:31.895817    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:17:31.928643    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:17:31.928650    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:17:31.939976    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:17:31.939987    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:17:31.944746    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:17:31.944755    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:17:31.980074    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:17:31.980085    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:17:31.992054    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:17:31.992065    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:17:32.003792    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:17:32.003802    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:17:34.520051    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:17:39.522361    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:17:39.522817    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:17:39.566674    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:17:39.566798    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:17:39.588361    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:17:39.588478    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:17:39.604054    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:17:39.604135    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:17:39.616030    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:17:39.616099    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:17:39.627313    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:17:39.627379    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:17:39.638728    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:17:39.638794    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:17:39.649726    5580 logs.go:276] 0 containers: []
	W0721 17:17:39.649737    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:17:39.649793    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:17:39.660404    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:17:39.660422    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:17:39.660427    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:17:39.673034    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:17:39.673047    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:17:39.684768    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:17:39.684779    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:17:39.718849    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:17:39.718863    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:17:39.733486    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:17:39.733495    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:17:39.747921    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:17:39.747934    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:17:39.760027    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:17:39.760039    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:17:39.772055    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:17:39.772066    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:17:39.794588    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:17:39.794597    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:17:39.805489    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:17:39.805500    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:17:39.817136    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:17:39.817148    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:17:39.850387    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:17:39.850395    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:17:39.864841    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:17:39.864852    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:17:39.869476    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:17:39.869482    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:17:39.881375    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:17:39.881388    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:17:42.408705    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:17:47.411036    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:17:47.411353    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:17:47.444212    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:17:47.444336    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:17:47.462844    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:17:47.462936    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:17:47.477317    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:17:47.477385    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:17:47.489453    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:17:47.489526    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:17:47.499969    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:17:47.500041    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:17:47.510663    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:17:47.510728    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:17:47.520954    5580 logs.go:276] 0 containers: []
	W0721 17:17:47.520966    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:17:47.521016    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:17:47.531696    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:17:47.531716    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:17:47.531721    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:17:47.545752    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:17:47.545762    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:17:47.557323    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:17:47.557337    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:17:47.572504    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:17:47.572515    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:17:47.589381    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:17:47.589391    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:17:47.612692    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:17:47.612699    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:17:47.629233    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:17:47.629246    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:17:47.641202    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:17:47.641212    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:17:47.653296    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:17:47.653308    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:17:47.665334    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:17:47.665346    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:17:47.669726    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:17:47.669736    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:17:47.703709    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:17:47.703723    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:17:47.714969    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:17:47.714981    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:17:47.726366    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:17:47.726380    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:17:47.760591    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:17:47.760599    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:17:50.275028    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:17:55.277775    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:17:55.278239    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:17:55.316832    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:17:55.316961    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:17:55.339104    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:17:55.339199    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:17:55.354107    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:17:55.354178    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:17:55.366703    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:17:55.366776    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:17:55.377524    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:17:55.377581    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:17:55.387874    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:17:55.387939    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:17:55.398159    5580 logs.go:276] 0 containers: []
	W0721 17:17:55.398169    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:17:55.398223    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:17:55.408720    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:17:55.408738    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:17:55.408743    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:17:55.444061    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:17:55.444071    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:17:55.459582    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:17:55.459594    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:17:55.473364    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:17:55.473376    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:17:55.485350    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:17:55.485363    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:17:55.503498    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:17:55.503508    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:17:55.527295    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:17:55.527302    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:17:55.538926    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:17:55.538937    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:17:55.550386    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:17:55.550400    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:17:55.563246    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:17:55.563258    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:17:55.574939    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:17:55.574952    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:17:55.586855    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:17:55.586869    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:17:55.590887    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:17:55.590895    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:17:55.625728    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:17:55.625738    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:17:55.640756    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:17:55.640770    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:17:58.156903    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:18:03.159537    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:18:03.159732    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:18:03.180267    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:18:03.180374    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:18:03.198782    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:18:03.198859    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:18:03.211773    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:18:03.211840    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:18:03.222609    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:18:03.222673    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:18:03.233001    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:18:03.233067    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:18:03.243722    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:18:03.243782    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:18:03.258986    5580 logs.go:276] 0 containers: []
	W0721 17:18:03.258997    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:18:03.259053    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:18:03.269256    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:18:03.269274    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:18:03.269279    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:18:03.280876    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:18:03.280889    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:18:03.298734    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:18:03.298746    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:18:03.333521    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:18:03.333531    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:18:03.344998    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:18:03.345012    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:18:03.359045    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:18:03.359057    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:18:03.370750    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:18:03.370760    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:18:03.382355    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:18:03.382366    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:18:03.393900    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:18:03.393910    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:18:03.418433    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:18:03.418440    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:18:03.453357    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:18:03.453370    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:18:03.467605    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:18:03.467616    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:18:03.483047    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:18:03.483058    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:18:03.495568    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:18:03.495579    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:18:03.500180    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:18:03.500188    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:18:06.014130    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:18:11.016285    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:18:11.016695    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:18:11.051586    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:18:11.051709    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:18:11.078697    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:18:11.078785    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:18:11.094413    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:18:11.094496    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:18:11.106059    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:18:11.106127    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:18:11.116950    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:18:11.117017    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:18:11.127393    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:18:11.127459    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:18:11.137263    5580 logs.go:276] 0 containers: []
	W0721 17:18:11.137279    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:18:11.137336    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:18:11.155923    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:18:11.155945    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:18:11.155952    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:18:11.167971    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:18:11.167982    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:18:11.172702    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:18:11.172710    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:18:11.187210    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:18:11.187222    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:18:11.198680    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:18:11.198693    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:18:11.211793    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:18:11.211804    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:18:11.237385    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:18:11.237393    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:18:11.272440    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:18:11.272451    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:18:11.307077    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:18:11.307089    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:18:11.319221    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:18:11.319231    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:18:11.330787    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:18:11.330796    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:18:11.344023    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:18:11.344038    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:18:11.356098    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:18:11.356108    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:18:11.373705    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:18:11.373713    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:18:11.389422    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:18:11.389435    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:18:13.908155    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:18:18.910556    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:18:18.910707    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:18:18.924011    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:18:18.924087    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:18:18.935542    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:18:18.935599    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:18:18.946136    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:18:18.946205    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:18:18.956443    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:18:18.956509    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:18:18.967491    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:18:18.967562    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:18:18.978160    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:18:18.978226    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:18:18.987917    5580 logs.go:276] 0 containers: []
	W0721 17:18:18.987931    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:18:18.987990    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:18:18.998190    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:18:18.998219    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:18:18.998226    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:18:19.015875    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:18:19.015888    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:18:19.027384    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:18:19.027394    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:18:19.041901    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:18:19.041912    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:18:19.054117    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:18:19.054125    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:18:19.088249    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:18:19.088260    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:18:19.103139    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:18:19.103148    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:18:19.114457    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:18:19.114469    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:18:19.137892    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:18:19.137898    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:18:19.170655    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:18:19.170662    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:18:19.182319    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:18:19.182328    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:18:19.186505    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:18:19.186515    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:18:19.204154    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:18:19.204166    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:18:19.217894    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:18:19.217908    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:18:19.231865    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:18:19.231875    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:18:21.745548    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:18:26.746460    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:18:26.746869    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:18:26.785995    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:18:26.786114    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:18:26.808020    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:18:26.808136    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:18:26.824369    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:18:26.824461    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:18:26.837302    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:18:26.837367    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:18:26.848198    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:18:26.848255    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:18:26.858809    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:18:26.858863    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:18:26.869365    5580 logs.go:276] 0 containers: []
	W0721 17:18:26.869380    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:18:26.869431    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:18:26.880008    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:18:26.880029    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:18:26.880033    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:18:26.884639    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:18:26.884648    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:18:26.920072    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:18:26.920084    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:18:26.931804    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:18:26.931819    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:18:26.944120    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:18:26.944133    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:18:26.959755    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:18:26.959766    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:18:26.977453    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:18:26.977463    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:18:27.011478    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:18:27.011486    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:18:27.029580    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:18:27.029591    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:18:27.041345    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:18:27.041355    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:18:27.053602    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:18:27.053612    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:18:27.064616    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:18:27.064626    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:18:27.078833    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:18:27.078843    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:18:27.092240    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:18:27.092253    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:18:27.111339    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:18:27.111349    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:18:29.638242    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:18:34.640431    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:18:34.640867    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:18:34.682828    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:18:34.682961    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:18:34.705189    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:18:34.705298    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:18:34.720221    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:18:34.720310    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:18:34.732962    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:18:34.733034    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:18:34.743988    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:18:34.744051    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:18:34.754896    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:18:34.754961    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:18:34.765616    5580 logs.go:276] 0 containers: []
	W0721 17:18:34.765634    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:18:34.765688    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:18:34.775658    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:18:34.775676    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:18:34.775680    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:18:34.787474    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:18:34.787487    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:18:34.802401    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:18:34.802410    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:18:34.826500    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:18:34.826507    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:18:34.830759    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:18:34.830767    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:18:34.865033    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:18:34.865042    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:18:34.878653    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:18:34.878661    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:18:34.890146    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:18:34.890156    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:18:34.901923    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:18:34.901931    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:18:34.914407    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:18:34.914420    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:18:34.925714    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:18:34.925723    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:18:34.937607    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:18:34.937620    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:18:34.949116    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:18:34.949130    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:18:34.982778    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:18:34.982788    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:18:35.001066    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:18:35.001075    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:18:37.518917    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:18:42.520420    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:18:42.520483    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:18:42.532211    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:18:42.532271    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:18:42.543193    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:18:42.543258    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:18:42.554294    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:18:42.554341    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:18:42.565675    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:18:42.565723    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:18:42.576769    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:18:42.576833    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:18:42.586662    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:18:42.586730    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:18:42.596677    5580 logs.go:276] 0 containers: []
	W0721 17:18:42.596688    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:18:42.596742    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:18:42.606887    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:18:42.606909    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:18:42.606914    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:18:42.621781    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:18:42.621793    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:18:42.633189    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:18:42.633200    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:18:42.645201    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:18:42.645214    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:18:42.655867    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:18:42.655878    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:18:42.667833    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:18:42.667843    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:18:42.681530    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:18:42.681541    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:18:42.693817    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:18:42.693827    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:18:42.717084    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:18:42.717091    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:18:42.721217    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:18:42.721226    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:18:42.754848    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:18:42.754861    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:18:42.769172    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:18:42.769183    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:18:42.786878    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:18:42.786889    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:18:42.820555    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:18:42.820564    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:18:42.834752    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:18:42.834765    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:18:45.347954    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:18:50.350586    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:18:50.351007    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:18:50.391074    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:18:50.391213    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:18:50.412975    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:18:50.413093    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:18:50.429151    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:18:50.429229    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:18:50.441844    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:18:50.441920    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:18:50.453455    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:18:50.453521    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:18:50.463916    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:18:50.463987    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:18:50.474499    5580 logs.go:276] 0 containers: []
	W0721 17:18:50.474511    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:18:50.474561    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:18:50.484935    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:18:50.484952    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:18:50.484957    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:18:50.520417    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:18:50.520426    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:18:50.524543    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:18:50.524552    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:18:50.541563    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:18:50.541574    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:18:50.553722    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:18:50.553732    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:18:50.578115    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:18:50.578122    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:18:50.589731    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:18:50.589743    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:18:50.628783    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:18:50.628794    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:18:50.643702    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:18:50.643714    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:18:50.656369    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:18:50.656384    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:18:50.668533    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:18:50.668545    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:18:50.683253    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:18:50.683264    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:18:50.698203    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:18:50.698215    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:18:50.709467    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:18:50.709480    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:18:50.724628    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:18:50.724639    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:18:53.250598    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:18:58.252645    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:18:58.252906    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:18:58.279742    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:18:58.279851    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:18:58.296867    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:18:58.296942    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:18:58.313220    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:18:58.313290    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:18:58.324114    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:18:58.324177    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:18:58.334680    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:18:58.334739    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:18:58.344982    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:18:58.345039    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:18:58.354479    5580 logs.go:276] 0 containers: []
	W0721 17:18:58.354489    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:18:58.354532    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:18:58.365142    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:18:58.365158    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:18:58.365162    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:18:58.376415    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:18:58.376429    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:18:58.394030    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:18:58.394041    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:18:58.405920    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:18:58.405934    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:18:58.444530    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:18:58.444540    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:18:58.459405    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:18:58.459415    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:18:58.483491    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:18:58.483500    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:18:58.496708    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:18:58.496722    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:18:58.508382    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:18:58.508392    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:18:58.521395    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:18:58.521405    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:18:58.533202    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:18:58.533212    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:18:58.547284    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:18:58.547292    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:18:58.559015    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:18:58.559024    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:18:58.573053    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:18:58.573061    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:18:58.605632    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:18:58.605639    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:19:01.111634    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:19:06.112447    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:19:06.112646    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:19:06.123492    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:19:06.123551    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:19:06.134891    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:19:06.134951    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:19:06.147864    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:19:06.147940    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:19:06.160458    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:19:06.160510    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:19:06.170733    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:19:06.170797    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:19:06.181577    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:19:06.181635    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:19:06.193667    5580 logs.go:276] 0 containers: []
	W0721 17:19:06.193704    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:19:06.193815    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:19:06.207223    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:19:06.207239    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:19:06.207244    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:19:06.231982    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:19:06.231996    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:19:06.246878    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:19:06.246887    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:19:06.282926    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:19:06.282943    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:19:06.340859    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:19:06.340869    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:19:06.354796    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:19:06.354810    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:19:06.370118    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:19:06.370128    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:19:06.384924    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:19:06.384936    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:19:06.397072    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:19:06.397084    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:19:06.410876    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:19:06.410885    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:19:06.423922    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:19:06.423931    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:19:06.435699    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:19:06.435710    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:19:06.453692    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:19:06.453702    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:19:06.458166    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:19:06.458174    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:19:06.472630    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:19:06.472644    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:19:08.991188    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:19:13.993351    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:19:13.993481    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:19:14.007676    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:19:14.007745    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:19:14.020045    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:19:14.020121    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:19:14.032507    5580 logs.go:276] 4 containers: [0f78041cc2e7 ba0dbe768c21 0e6ef086c383 ae732c1007fd]
	I0721 17:19:14.032584    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:19:14.048507    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:19:14.048577    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:19:14.060559    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:19:14.060640    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:19:14.077432    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:19:14.077511    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:19:14.089675    5580 logs.go:276] 0 containers: []
	W0721 17:19:14.089689    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:19:14.089754    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:19:14.101548    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:19:14.101566    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:19:14.101572    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:19:14.122625    5580 logs.go:123] Gathering logs for coredns [0e6ef086c383] ...
	I0721 17:19:14.122635    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6ef086c383"
	I0721 17:19:14.134731    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:19:14.134742    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:19:14.169639    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:19:14.169650    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:19:14.181344    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:19:14.181357    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:19:14.198308    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:19:14.198320    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:19:14.202593    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:19:14.202599    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:19:14.220990    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:19:14.221001    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:19:14.233810    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:19:14.233820    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:19:14.254455    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:19:14.254465    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:19:14.279387    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:19:14.279395    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:19:14.314125    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:19:14.314132    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:19:14.335853    5580 logs.go:123] Gathering logs for coredns [ae732c1007fd] ...
	I0721 17:19:14.335863    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae732c1007fd"
	I0721 17:19:14.347976    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:19:14.347987    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:19:14.358994    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:19:14.359003    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:19:16.874977    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:19:21.877005    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:19:21.877141    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0721 17:19:21.889817    5580 logs.go:276] 1 containers: [25adc97e7f62]
	I0721 17:19:21.889885    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0721 17:19:21.900424    5580 logs.go:276] 1 containers: [9e443788c208]
	I0721 17:19:21.900498    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0721 17:19:21.910972    5580 logs.go:276] 4 containers: [5e60763c68f3 efb00ef254ff 0f78041cc2e7 ba0dbe768c21]
	I0721 17:19:21.911046    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0721 17:19:21.921486    5580 logs.go:276] 1 containers: [4994893920eb]
	I0721 17:19:21.921557    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0721 17:19:21.931544    5580 logs.go:276] 1 containers: [f936b7818dac]
	I0721 17:19:21.931620    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0721 17:19:21.942226    5580 logs.go:276] 1 containers: [670eaf06327d]
	I0721 17:19:21.942291    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0721 17:19:21.952740    5580 logs.go:276] 0 containers: []
	W0721 17:19:21.952752    5580 logs.go:278] No container was found matching "kindnet"
	I0721 17:19:21.952806    5580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0721 17:19:21.963196    5580 logs.go:276] 1 containers: [8c85131f9fc9]
	I0721 17:19:21.963216    5580 logs.go:123] Gathering logs for coredns [5e60763c68f3] ...
	I0721 17:19:21.963222    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60763c68f3"
	I0721 17:19:21.974264    5580 logs.go:123] Gathering logs for coredns [0f78041cc2e7] ...
	I0721 17:19:21.974277    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f78041cc2e7"
	I0721 17:19:21.986099    5580 logs.go:123] Gathering logs for kube-proxy [f936b7818dac] ...
	I0721 17:19:21.986108    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f936b7818dac"
	I0721 17:19:21.997513    5580 logs.go:123] Gathering logs for dmesg ...
	I0721 17:19:21.997525    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0721 17:19:22.002242    5580 logs.go:123] Gathering logs for kube-controller-manager [670eaf06327d] ...
	I0721 17:19:22.002249    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670eaf06327d"
	I0721 17:19:22.020034    5580 logs.go:123] Gathering logs for storage-provisioner [8c85131f9fc9] ...
	I0721 17:19:22.020047    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c85131f9fc9"
	I0721 17:19:22.034939    5580 logs.go:123] Gathering logs for Docker ...
	I0721 17:19:22.034953    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0721 17:19:22.059365    5580 logs.go:123] Gathering logs for container status ...
	I0721 17:19:22.059374    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0721 17:19:22.071698    5580 logs.go:123] Gathering logs for etcd [9e443788c208] ...
	I0721 17:19:22.071711    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e443788c208"
	I0721 17:19:22.086507    5580 logs.go:123] Gathering logs for coredns [efb00ef254ff] ...
	I0721 17:19:22.086517    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb00ef254ff"
	I0721 17:19:22.097662    5580 logs.go:123] Gathering logs for coredns [ba0dbe768c21] ...
	I0721 17:19:22.097674    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba0dbe768c21"
	I0721 17:19:22.109186    5580 logs.go:123] Gathering logs for kube-scheduler [4994893920eb] ...
	I0721 17:19:22.109198    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4994893920eb"
	I0721 17:19:22.124396    5580 logs.go:123] Gathering logs for kubelet ...
	I0721 17:19:22.124407    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0721 17:19:22.159043    5580 logs.go:123] Gathering logs for describe nodes ...
	I0721 17:19:22.159051    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0721 17:19:22.192476    5580 logs.go:123] Gathering logs for kube-apiserver [25adc97e7f62] ...
	I0721 17:19:22.192489    5580 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25adc97e7f62"
	I0721 17:19:24.712263    5580 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0721 17:19:29.714912    5580 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0721 17:19:29.718678    5580 out.go:177] 
	W0721 17:19:29.722742    5580 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0721 17:19:29.722751    5580 out.go:239] * 
	* 
	W0721 17:19:29.723453    5580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:19:29.738642    5580 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-930000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (580.86s)
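The failure above is the apiserver health wait timing out: the loop at api_server.go:253 keeps probing https://10.0.2.15:8443/healthz and, between probes, re-runs the docker ps / docker logs --tail 400 gathering shown in the log. As a point of reference only (this is a hypothetical sketch, not minikube's implementation), an equivalent standalone healthz probe in Go, assuming the same endpoint and a 5-second timeout matching the roughly 5s gap between "Checking apiserver healthz" and "context deadline exceeded" above, could look like:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Probe the same endpoint the failing wait loop checks.
		// The apiserver serves a self-signed certificate, so skip verification here.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// A timeout here corresponds to the "context deadline exceeded" lines above.
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	}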

                                                
                                    
TestPause/serial/Start (9.86s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-756000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-756000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.797359542s)

                                                
                                                
-- stdout --
	* [pause-756000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-756000" primary control-plane node in "pause-756000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-756000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-756000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-756000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-756000 -n pause-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-756000 -n pause-756000: exit status 7 (57.619708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.86s)
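This and the following qemu2 starts all fail the same way: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. nothing on the host is accepting connections on the socket_vmnet unix socket. A quick way to confirm that independently of minikube is a plain unix-socket dial; the sketch below is hypothetical diagnostic code, assuming only the default socket path shown in the logs:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Path taken from the failing logs; adjust if socket_vmnet is configured elsewhere.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the GUEST_PROVISION failures above.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}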

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-731000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-731000 --driver=qemu2 : exit status 80 (9.853716583s)

                                                
                                                
-- stdout --
	* [NoKubernetes-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-731000" primary control-plane node in "NoKubernetes-731000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-731000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-731000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-731000 -n NoKubernetes-731000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-731000 -n NoKubernetes-731000: exit status 7 (66.583917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-731000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.92s)
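After each failed start, the harness runs a post-mortem `minikube status --format={{.Host}}` and treats exit status 7 (here with output "Stopped") as "may be ok", skipping log retrieval. A rough Go sketch of that check (hypothetical, not the helpers_test.go code) could look like:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirror the post-mortem step: query host state for the profile used above.
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "NoKubernetes-731000", "-n", "NoKubernetes-731000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("status output: %s\n", out)
		if err != nil {
			// In the run above this path was taken with exit status 7 and output "Stopped".
			if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
				fmt.Println("exit status 7: host not running, skipping log retrieval")
				return
			}
			fmt.Println("unexpected status error:", err)
		}
	}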

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-731000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-731000 --no-kubernetes --driver=qemu2 : exit status 80 (5.251622958s)

                                                
                                                
-- stdout --
	* [NoKubernetes-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-731000
	* Restarting existing qemu2 VM for "NoKubernetes-731000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-731000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-731000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-731000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-731000 -n NoKubernetes-731000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-731000 -n NoKubernetes-731000: exit status 7 (54.482083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-731000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

                                                
                                    
TestNoKubernetes/serial/Start (5.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-731000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-731000 --no-kubernetes --driver=qemu2 : exit status 80 (5.22773925s)

                                                
                                                
-- stdout --
	* [NoKubernetes-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-731000
	* Restarting existing qemu2 VM for "NoKubernetes-731000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-731000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-731000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-731000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-731000 -n NoKubernetes-731000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-731000 -n NoKubernetes-731000: exit status 7 (48.864584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-731000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-731000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-731000 --driver=qemu2 : exit status 80 (5.270419625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-731000
	* Restarting existing qemu2 VM for "NoKubernetes-731000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-731000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-731000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-731000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-731000 -n NoKubernetes-731000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-731000 -n NoKubernetes-731000: exit status 7 (45.237375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-731000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.940143875s)

                                                
                                                
-- stdout --
	* [auto-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-396000" primary control-plane node in "auto-396000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-396000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:17:41.747240    5827 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:17:41.747429    5827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:17:41.747432    5827 out.go:304] Setting ErrFile to fd 2...
	I0721 17:17:41.747434    5827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:17:41.747566    5827 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:17:41.748587    5827 out.go:298] Setting JSON to false
	I0721 17:17:41.764591    5827 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4624,"bootTime":1721602837,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:17:41.764781    5827 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:17:41.771065    5827 out.go:177] * [auto-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:17:41.778068    5827 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:17:41.778079    5827 notify.go:220] Checking for updates...
	I0721 17:17:41.784961    5827 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:17:41.788066    5827 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:17:41.791086    5827 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:17:41.794024    5827 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:17:41.797012    5827 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:17:41.800377    5827 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:17:41.800447    5827 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:17:41.800501    5827 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:17:41.803898    5827 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:17:41.811072    5827 start.go:297] selected driver: qemu2
	I0721 17:17:41.811078    5827 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:17:41.811085    5827 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:17:41.813359    5827 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:17:41.814577    5827 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:17:41.817097    5827 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:17:41.817120    5827 cni.go:84] Creating CNI manager for ""
	I0721 17:17:41.817130    5827 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:17:41.817134    5827 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:17:41.817171    5827 start.go:340] cluster config:
	{Name:auto-396000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:17:41.820753    5827 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:17:41.827921    5827 out.go:177] * Starting "auto-396000" primary control-plane node in "auto-396000" cluster
	I0721 17:17:41.831979    5827 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:17:41.831993    5827 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:17:41.832006    5827 cache.go:56] Caching tarball of preloaded images
	I0721 17:17:41.832060    5827 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:17:41.832066    5827 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:17:41.832133    5827 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/auto-396000/config.json ...
	I0721 17:17:41.832145    5827 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/auto-396000/config.json: {Name:mk4e2bd4020ab13ea275e3e7b70349eae08a68bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:17:41.832467    5827 start.go:360] acquireMachinesLock for auto-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:17:41.832496    5827 start.go:364] duration metric: took 23.833µs to acquireMachinesLock for "auto-396000"
	I0721 17:17:41.832505    5827 start.go:93] Provisioning new machine with config: &{Name:auto-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:17:41.832535    5827 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:17:41.840003    5827 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:17:41.855008    5827 start.go:159] libmachine.API.Create for "auto-396000" (driver="qemu2")
	I0721 17:17:41.855039    5827 client.go:168] LocalClient.Create starting
	I0721 17:17:41.855101    5827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:17:41.855133    5827 main.go:141] libmachine: Decoding PEM data...
	I0721 17:17:41.855143    5827 main.go:141] libmachine: Parsing certificate...
	I0721 17:17:41.855188    5827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:17:41.855216    5827 main.go:141] libmachine: Decoding PEM data...
	I0721 17:17:41.855225    5827 main.go:141] libmachine: Parsing certificate...
	I0721 17:17:41.855646    5827 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:17:42.014942    5827 main.go:141] libmachine: Creating SSH key...
	I0721 17:17:42.136158    5827 main.go:141] libmachine: Creating Disk image...
	I0721 17:17:42.136167    5827 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:17:42.136346    5827 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/disk.qcow2
	I0721 17:17:42.145775    5827 main.go:141] libmachine: STDOUT: 
	I0721 17:17:42.145795    5827 main.go:141] libmachine: STDERR: 
	I0721 17:17:42.145858    5827 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/disk.qcow2 +20000M
	I0721 17:17:42.153861    5827 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:17:42.153877    5827 main.go:141] libmachine: STDERR: 
	I0721 17:17:42.153892    5827 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/disk.qcow2
	I0721 17:17:42.153898    5827 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:17:42.153913    5827 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:17:42.153937    5827 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:4b:d4:e7:0c:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/disk.qcow2
	I0721 17:17:42.155532    5827 main.go:141] libmachine: STDOUT: 
	I0721 17:17:42.155549    5827 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:17:42.155565    5827 client.go:171] duration metric: took 300.531542ms to LocalClient.Create
	I0721 17:17:44.157626    5827 start.go:128] duration metric: took 2.325139542s to createHost
	I0721 17:17:44.157673    5827 start.go:83] releasing machines lock for "auto-396000", held for 2.32522925s
	W0721 17:17:44.157721    5827 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:17:44.169076    5827 out.go:177] * Deleting "auto-396000" in qemu2 ...
	W0721 17:17:44.188736    5827 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:17:44.188756    5827 start.go:729] Will try again in 5 seconds ...
	I0721 17:17:49.190860    5827 start.go:360] acquireMachinesLock for auto-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:17:49.191531    5827 start.go:364] duration metric: took 508.75µs to acquireMachinesLock for "auto-396000"
	I0721 17:17:49.191614    5827 start.go:93] Provisioning new machine with config: &{Name:auto-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:17:49.191960    5827 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:17:49.197661    5827 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:17:49.248469    5827 start.go:159] libmachine.API.Create for "auto-396000" (driver="qemu2")
	I0721 17:17:49.248523    5827 client.go:168] LocalClient.Create starting
	I0721 17:17:49.248660    5827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:17:49.248731    5827 main.go:141] libmachine: Decoding PEM data...
	I0721 17:17:49.248746    5827 main.go:141] libmachine: Parsing certificate...
	I0721 17:17:49.248824    5827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:17:49.248872    5827 main.go:141] libmachine: Decoding PEM data...
	I0721 17:17:49.248895    5827 main.go:141] libmachine: Parsing certificate...
	I0721 17:17:49.249419    5827 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:17:49.400155    5827 main.go:141] libmachine: Creating SSH key...
	I0721 17:17:49.590715    5827 main.go:141] libmachine: Creating Disk image...
	I0721 17:17:49.590725    5827 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:17:49.590919    5827 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/disk.qcow2
	I0721 17:17:49.600607    5827 main.go:141] libmachine: STDOUT: 
	I0721 17:17:49.600631    5827 main.go:141] libmachine: STDERR: 
	I0721 17:17:49.600684    5827 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/disk.qcow2 +20000M
	I0721 17:17:49.608816    5827 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:17:49.608830    5827 main.go:141] libmachine: STDERR: 
	I0721 17:17:49.608848    5827 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/disk.qcow2
	I0721 17:17:49.608866    5827 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:17:49.608875    5827 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:17:49.608902    5827 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:d8:1b:7d:74:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/auto-396000/disk.qcow2
	I0721 17:17:49.610599    5827 main.go:141] libmachine: STDOUT: 
	I0721 17:17:49.610708    5827 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:17:49.610719    5827 client.go:171] duration metric: took 362.197958ms to LocalClient.Create
	I0721 17:17:51.612894    5827 start.go:128] duration metric: took 2.420958708s to createHost
	I0721 17:17:51.612966    5827 start.go:83] releasing machines lock for "auto-396000", held for 2.421472291s
	W0721 17:17:51.613363    5827 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:17:51.627971    5827 out.go:177] 
	W0721 17:17:51.632279    5827 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:17:51.632303    5827 out.go:239] * 
	* 
	W0721 17:17:51.634711    5827 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:17:51.646066    5827 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.94s)
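Every failure in this group has the same proximate cause: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so libmachine never boots a VM and minikube exits with GUEST_PROVISION (exit status 80). A quick spot-check of the daemon on the CI host could look like the sketch below; it only relies on the paths minikube logged above, and the Homebrew service name is an assumption about how socket_vmnet was installed on this agent.

	# does the unix socket minikube is dialing actually exist?
	ls -l /var/run/socket_vmnet
	# is a socket_vmnet daemon process running at all?
	pgrep -fl socket_vmnet
	# if socket_vmnet is Homebrew-managed, restarting its root service is one
	# way to recreate the socket (assumption: brew-installed socket_vmnet)
	sudo brew services restart socket_vmnet

With a healthy daemon, the Start tests in this group would get past the "Creating qemu2 VM" step instead of failing in roughly ten seconds.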

TestNetworkPlugins/group/calico/Start (9.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.795896375s)

-- stdout --
	* [calico-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-396000" primary control-plane node in "calico-396000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-396000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:17:53.771619    5941 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:17:53.771771    5941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:17:53.771774    5941 out.go:304] Setting ErrFile to fd 2...
	I0721 17:17:53.771776    5941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:17:53.771919    5941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:17:53.773109    5941 out.go:298] Setting JSON to false
	I0721 17:17:53.789596    5941 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4636,"bootTime":1721602837,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:17:53.789668    5941 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:17:53.795929    5941 out.go:177] * [calico-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:17:53.803906    5941 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:17:53.803945    5941 notify.go:220] Checking for updates...
	I0721 17:17:53.810936    5941 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:17:53.813883    5941 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:17:53.816883    5941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:17:53.819802    5941 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:17:53.822855    5941 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:17:53.826224    5941 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:17:53.826294    5941 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:17:53.826339    5941 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:17:53.830863    5941 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:17:53.837841    5941 start.go:297] selected driver: qemu2
	I0721 17:17:53.837846    5941 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:17:53.837851    5941 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:17:53.840040    5941 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:17:53.842875    5941 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:17:53.845923    5941 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:17:53.845953    5941 cni.go:84] Creating CNI manager for "calico"
	I0721 17:17:53.845962    5941 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0721 17:17:53.845986    5941 start.go:340] cluster config:
	{Name:calico-396000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:17:53.849342    5941 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:17:53.856880    5941 out.go:177] * Starting "calico-396000" primary control-plane node in "calico-396000" cluster
	I0721 17:17:53.860822    5941 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:17:53.860842    5941 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:17:53.860853    5941 cache.go:56] Caching tarball of preloaded images
	I0721 17:17:53.860920    5941 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:17:53.860926    5941 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:17:53.860988    5941 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/calico-396000/config.json ...
	I0721 17:17:53.861003    5941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/calico-396000/config.json: {Name:mk89a3f043ed8f706cdc90afba5d22e6f52f8832 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:17:53.861215    5941 start.go:360] acquireMachinesLock for calico-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:17:53.861245    5941 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "calico-396000"
	I0721 17:17:53.861255    5941 start.go:93] Provisioning new machine with config: &{Name:calico-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:17:53.861276    5941 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:17:53.869877    5941 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:17:53.885821    5941 start.go:159] libmachine.API.Create for "calico-396000" (driver="qemu2")
	I0721 17:17:53.885841    5941 client.go:168] LocalClient.Create starting
	I0721 17:17:53.885895    5941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:17:53.885925    5941 main.go:141] libmachine: Decoding PEM data...
	I0721 17:17:53.885934    5941 main.go:141] libmachine: Parsing certificate...
	I0721 17:17:53.885972    5941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:17:53.886002    5941 main.go:141] libmachine: Decoding PEM data...
	I0721 17:17:53.886010    5941 main.go:141] libmachine: Parsing certificate...
	I0721 17:17:53.886323    5941 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:17:54.028276    5941 main.go:141] libmachine: Creating SSH key...
	I0721 17:17:54.157755    5941 main.go:141] libmachine: Creating Disk image...
	I0721 17:17:54.157763    5941 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:17:54.157933    5941 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/disk.qcow2
	I0721 17:17:54.167204    5941 main.go:141] libmachine: STDOUT: 
	I0721 17:17:54.167228    5941 main.go:141] libmachine: STDERR: 
	I0721 17:17:54.167284    5941 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/disk.qcow2 +20000M
	I0721 17:17:54.175798    5941 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:17:54.175818    5941 main.go:141] libmachine: STDERR: 
	I0721 17:17:54.175837    5941 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/disk.qcow2
	I0721 17:17:54.175840    5941 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:17:54.175856    5941 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:17:54.175883    5941 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:4c:f6:d0:e4:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/disk.qcow2
	I0721 17:17:54.177623    5941 main.go:141] libmachine: STDOUT: 
	I0721 17:17:54.177640    5941 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:17:54.177657    5941 client.go:171] duration metric: took 291.820875ms to LocalClient.Create
	I0721 17:17:56.179791    5941 start.go:128] duration metric: took 2.318547625s to createHost
	I0721 17:17:56.179858    5941 start.go:83] releasing machines lock for "calico-396000", held for 2.318670416s
	W0721 17:17:56.179941    5941 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:17:56.189831    5941 out.go:177] * Deleting "calico-396000" in qemu2 ...
	W0721 17:17:56.213083    5941 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:17:56.213115    5941 start.go:729] Will try again in 5 seconds ...
	I0721 17:18:01.215239    5941 start.go:360] acquireMachinesLock for calico-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:18:01.215643    5941 start.go:364] duration metric: took 316.209µs to acquireMachinesLock for "calico-396000"
	I0721 17:18:01.215689    5941 start.go:93] Provisioning new machine with config: &{Name:calico-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:18:01.215903    5941 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:18:01.223254    5941 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:18:01.257247    5941 start.go:159] libmachine.API.Create for "calico-396000" (driver="qemu2")
	I0721 17:18:01.257294    5941 client.go:168] LocalClient.Create starting
	I0721 17:18:01.257397    5941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:18:01.257455    5941 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:01.257469    5941 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:01.257529    5941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:18:01.257574    5941 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:01.257584    5941 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:01.258055    5941 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:18:01.404146    5941 main.go:141] libmachine: Creating SSH key...
	I0721 17:18:01.485586    5941 main.go:141] libmachine: Creating Disk image...
	I0721 17:18:01.485595    5941 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:18:01.485762    5941 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/disk.qcow2
	I0721 17:18:01.495286    5941 main.go:141] libmachine: STDOUT: 
	I0721 17:18:01.495306    5941 main.go:141] libmachine: STDERR: 
	I0721 17:18:01.495357    5941 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/disk.qcow2 +20000M
	I0721 17:18:01.503450    5941 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:18:01.503463    5941 main.go:141] libmachine: STDERR: 
	I0721 17:18:01.503474    5941 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/disk.qcow2
	I0721 17:18:01.503479    5941 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:18:01.503490    5941 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:18:01.503515    5941 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:cf:47:e3:57:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/calico-396000/disk.qcow2
	I0721 17:18:01.505188    5941 main.go:141] libmachine: STDOUT: 
	I0721 17:18:01.505204    5941 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:18:01.505216    5941 client.go:171] duration metric: took 247.92375ms to LocalClient.Create
	I0721 17:18:03.507261    5941 start.go:128] duration metric: took 2.291407917s to createHost
	I0721 17:18:03.507277    5941 start.go:83] releasing machines lock for "calico-396000", held for 2.291684167s
	W0721 17:18:03.507353    5941 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:03.516630    5941 out.go:177] 
	W0721 17:18:03.520555    5941 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:18:03.520561    5941 out.go:239] * 
	* 
	W0721 17:18:03.521080    5941 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:18:03.530554    5941 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.80s)
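The calico failure is identical to the auto failure above; only the profile name and CNI choice differ. Assuming socket_vmnet_client follows the SOCKET COMMAND calling convention visible in the qemu invocation it logs, a minimal connectivity probe that avoids booting a VM might be:

	# connect to the socket and exec a no-op instead of qemu-system-aarch64;
	# a healthy daemon lets this exit 0, while a missing one reproduces
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true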

TestNetworkPlugins/group/custom-flannel/Start (9.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
E0721 17:18:09.286153    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.879295334s)

-- stdout --
	* [custom-flannel-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-396000" primary control-plane node in "custom-flannel-396000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-396000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:18:05.839426    6062 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:18:05.839559    6062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:18:05.839563    6062 out.go:304] Setting ErrFile to fd 2...
	I0721 17:18:05.839565    6062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:18:05.839686    6062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:18:05.840791    6062 out.go:298] Setting JSON to false
	I0721 17:18:05.856717    6062 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4648,"bootTime":1721602837,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:18:05.856786    6062 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:18:05.862277    6062 out.go:177] * [custom-flannel-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:18:05.870526    6062 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:18:05.870668    6062 notify.go:220] Checking for updates...
	I0721 17:18:05.877466    6062 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:18:05.880580    6062 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:18:05.883365    6062 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:18:05.886436    6062 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:18:05.889472    6062 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:18:05.891096    6062 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:18:05.891160    6062 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:18:05.891210    6062 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:18:05.895424    6062 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:18:05.902283    6062 start.go:297] selected driver: qemu2
	I0721 17:18:05.902291    6062 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:18:05.902298    6062 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:18:05.904529    6062 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:18:05.907455    6062 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:18:05.910495    6062 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:18:05.910512    6062 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0721 17:18:05.910520    6062 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0721 17:18:05.910550    6062 start.go:340] cluster config:
	{Name:custom-flannel-396000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:18:05.914065    6062 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:18:05.921453    6062 out.go:177] * Starting "custom-flannel-396000" primary control-plane node in "custom-flannel-396000" cluster
	I0721 17:18:05.925394    6062 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:18:05.925410    6062 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:18:05.925430    6062 cache.go:56] Caching tarball of preloaded images
	I0721 17:18:05.925494    6062 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:18:05.925499    6062 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:18:05.925552    6062 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/custom-flannel-396000/config.json ...
	I0721 17:18:05.925564    6062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/custom-flannel-396000/config.json: {Name:mk8424075a653a9de41371dd3d34e0da86d975f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:18:05.925770    6062 start.go:360] acquireMachinesLock for custom-flannel-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:18:05.925808    6062 start.go:364] duration metric: took 28µs to acquireMachinesLock for "custom-flannel-396000"
	I0721 17:18:05.925818    6062 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:18:05.925850    6062 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:18:05.934435    6062 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:18:05.951251    6062 start.go:159] libmachine.API.Create for "custom-flannel-396000" (driver="qemu2")
	I0721 17:18:05.951277    6062 client.go:168] LocalClient.Create starting
	I0721 17:18:05.951350    6062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:18:05.951384    6062 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:05.951394    6062 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:05.951434    6062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:18:05.951458    6062 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:05.951466    6062 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:05.951843    6062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:18:06.093756    6062 main.go:141] libmachine: Creating SSH key...
	I0721 17:18:06.190203    6062 main.go:141] libmachine: Creating Disk image...
	I0721 17:18:06.190211    6062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:18:06.190361    6062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/disk.qcow2
	I0721 17:18:06.199709    6062 main.go:141] libmachine: STDOUT: 
	I0721 17:18:06.199727    6062 main.go:141] libmachine: STDERR: 
	I0721 17:18:06.199781    6062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/disk.qcow2 +20000M
	I0721 17:18:06.208083    6062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:18:06.208107    6062 main.go:141] libmachine: STDERR: 
	I0721 17:18:06.208123    6062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/disk.qcow2
	I0721 17:18:06.208128    6062 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:18:06.208139    6062 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:18:06.208173    6062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:fc:d9:5d:bf:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/disk.qcow2
	I0721 17:18:06.210086    6062 main.go:141] libmachine: STDOUT: 
	I0721 17:18:06.210105    6062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:18:06.210123    6062 client.go:171] duration metric: took 258.849375ms to LocalClient.Create
	I0721 17:18:08.212252    6062 start.go:128] duration metric: took 2.286441209s to createHost
	I0721 17:18:08.212360    6062 start.go:83] releasing machines lock for "custom-flannel-396000", held for 2.286607417s
	W0721 17:18:08.212416    6062 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:08.223333    6062 out.go:177] * Deleting "custom-flannel-396000" in qemu2 ...
	W0721 17:18:08.246726    6062 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:08.246750    6062 start.go:729] Will try again in 5 seconds ...
	I0721 17:18:13.248815    6062 start.go:360] acquireMachinesLock for custom-flannel-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:18:13.249074    6062 start.go:364] duration metric: took 211.375µs to acquireMachinesLock for "custom-flannel-396000"
	I0721 17:18:13.249141    6062 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:18:13.249240    6062 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:18:13.257594    6062 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:18:13.289524    6062 start.go:159] libmachine.API.Create for "custom-flannel-396000" (driver="qemu2")
	I0721 17:18:13.289569    6062 client.go:168] LocalClient.Create starting
	I0721 17:18:13.289660    6062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:18:13.289707    6062 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:13.289720    6062 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:13.289772    6062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:18:13.289807    6062 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:13.289820    6062 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:13.290279    6062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:18:13.434672    6062 main.go:141] libmachine: Creating SSH key...
	I0721 17:18:13.627153    6062 main.go:141] libmachine: Creating Disk image...
	I0721 17:18:13.627173    6062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:18:13.627358    6062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/disk.qcow2
	I0721 17:18:13.637358    6062 main.go:141] libmachine: STDOUT: 
	I0721 17:18:13.637380    6062 main.go:141] libmachine: STDERR: 
	I0721 17:18:13.637440    6062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/disk.qcow2 +20000M
	I0721 17:18:13.645598    6062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:18:13.645613    6062 main.go:141] libmachine: STDERR: 
	I0721 17:18:13.645631    6062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/disk.qcow2
	I0721 17:18:13.645634    6062 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:18:13.645644    6062 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:18:13.645673    6062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:f1:6b:92:a3:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/custom-flannel-396000/disk.qcow2
	I0721 17:18:13.647382    6062 main.go:141] libmachine: STDOUT: 
	I0721 17:18:13.647402    6062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:18:13.647414    6062 client.go:171] duration metric: took 357.850333ms to LocalClient.Create
	I0721 17:18:15.649574    6062 start.go:128] duration metric: took 2.400365667s to createHost
	I0721 17:18:15.649748    6062 start.go:83] releasing machines lock for "custom-flannel-396000", held for 2.400689209s
	W0721 17:18:15.650071    6062 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:15.663688    6062 out.go:177] 
	W0721 17:18:15.667706    6062 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:18:15.667723    6062 out.go:239] * 
	* 
	W0721 17:18:15.669327    6062 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:18:15.682587    6062 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.88s)
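
Note: every failure in this group follows the same pattern — socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never started and minikube exits with status 80. A minimal diagnostic sketch for the build agent, assuming socket_vmnet was installed via Homebrew (socket path and service name may differ on other setups):

	# Check whether the daemon socket exists on the agent
	ls -l /var/run/socket_vmnet

	# Check whether the Homebrew-managed socket_vmnet service is running
	sudo brew services list | grep socket_vmnet

	# (Re)start the daemon, then re-run the failing test
	sudo brew services start socket_vmnet

If the socket is present and the service is running, the same "Connection refused" from socket_vmnet_client would instead point at a permissions or path mismatch between SocketVMnetPath in the cluster config and the daemon's actual listen socket.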

                                                
                                    
TestNetworkPlugins/group/false/Start (9.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.9338885s)

                                                
                                                
-- stdout --
	* [false-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-396000" primary control-plane node in "false-396000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-396000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:18:18.039052    6180 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:18:18.039174    6180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:18:18.039178    6180 out.go:304] Setting ErrFile to fd 2...
	I0721 17:18:18.039181    6180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:18:18.039324    6180 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:18:18.040474    6180 out.go:298] Setting JSON to false
	I0721 17:18:18.056914    6180 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4661,"bootTime":1721602837,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:18:18.056985    6180 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:18:18.062378    6180 out.go:177] * [false-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:18:18.070404    6180 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:18:18.070458    6180 notify.go:220] Checking for updates...
	I0721 17:18:18.077388    6180 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:18:18.080362    6180 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:18:18.083389    6180 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:18:18.086386    6180 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:18:18.089346    6180 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:18:18.092766    6180 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:18:18.092838    6180 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:18:18.092882    6180 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:18:18.097430    6180 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:18:18.104365    6180 start.go:297] selected driver: qemu2
	I0721 17:18:18.104373    6180 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:18:18.104379    6180 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:18:18.106494    6180 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:18:18.109406    6180 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:18:18.112360    6180 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:18:18.112413    6180 cni.go:84] Creating CNI manager for "false"
	I0721 17:18:18.112457    6180 start.go:340] cluster config:
	{Name:false-396000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:18:18.115862    6180 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:18:18.123418    6180 out.go:177] * Starting "false-396000" primary control-plane node in "false-396000" cluster
	I0721 17:18:18.127324    6180 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:18:18.127342    6180 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:18:18.127354    6180 cache.go:56] Caching tarball of preloaded images
	I0721 17:18:18.127420    6180 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:18:18.127425    6180 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:18:18.127489    6180 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/false-396000/config.json ...
	I0721 17:18:18.127502    6180 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/false-396000/config.json: {Name:mk8f7c4fc9fd4036c20bf6696dd2a7dfd5ebfc5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:18:18.127831    6180 start.go:360] acquireMachinesLock for false-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:18:18.127862    6180 start.go:364] duration metric: took 25.583µs to acquireMachinesLock for "false-396000"
	I0721 17:18:18.127871    6180 start.go:93] Provisioning new machine with config: &{Name:false-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:false-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:18:18.127902    6180 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:18:18.136339    6180 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:18:18.152573    6180 start.go:159] libmachine.API.Create for "false-396000" (driver="qemu2")
	I0721 17:18:18.152612    6180 client.go:168] LocalClient.Create starting
	I0721 17:18:18.152676    6180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:18:18.152708    6180 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:18.152718    6180 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:18.152757    6180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:18:18.152779    6180 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:18.152790    6180 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:18.153205    6180 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:18:18.322930    6180 main.go:141] libmachine: Creating SSH key...
	I0721 17:18:18.573208    6180 main.go:141] libmachine: Creating Disk image...
	I0721 17:18:18.573218    6180 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:18:18.573413    6180 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/disk.qcow2
	I0721 17:18:18.583173    6180 main.go:141] libmachine: STDOUT: 
	I0721 17:18:18.583195    6180 main.go:141] libmachine: STDERR: 
	I0721 17:18:18.583252    6180 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/disk.qcow2 +20000M
	I0721 17:18:18.591249    6180 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:18:18.591262    6180 main.go:141] libmachine: STDERR: 
	I0721 17:18:18.591279    6180 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/disk.qcow2
	I0721 17:18:18.591284    6180 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:18:18.591298    6180 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:18:18.591328    6180 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:c2:82:31:54:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/disk.qcow2
	I0721 17:18:18.592995    6180 main.go:141] libmachine: STDOUT: 
	I0721 17:18:18.593008    6180 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:18:18.593027    6180 client.go:171] duration metric: took 440.420167ms to LocalClient.Create
	I0721 17:18:20.595141    6180 start.go:128] duration metric: took 2.467281084s to createHost
	I0721 17:18:20.595235    6180 start.go:83] releasing machines lock for "false-396000", held for 2.467434917s
	W0721 17:18:20.595284    6180 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:20.605972    6180 out.go:177] * Deleting "false-396000" in qemu2 ...
	W0721 17:18:20.628350    6180 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:20.628375    6180 start.go:729] Will try again in 5 seconds ...
	I0721 17:18:25.630442    6180 start.go:360] acquireMachinesLock for false-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:18:25.631114    6180 start.go:364] duration metric: took 438.042µs to acquireMachinesLock for "false-396000"
	I0721 17:18:25.631253    6180 start.go:93] Provisioning new machine with config: &{Name:false-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:false-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:18:25.631521    6180 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:18:25.641081    6180 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:18:25.690187    6180 start.go:159] libmachine.API.Create for "false-396000" (driver="qemu2")
	I0721 17:18:25.690245    6180 client.go:168] LocalClient.Create starting
	I0721 17:18:25.690351    6180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:18:25.690414    6180 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:25.690433    6180 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:25.690489    6180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:18:25.690534    6180 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:25.690559    6180 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:25.691045    6180 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:18:25.841596    6180 main.go:141] libmachine: Creating SSH key...
	I0721 17:18:25.881790    6180 main.go:141] libmachine: Creating Disk image...
	I0721 17:18:25.881794    6180 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:18:25.881948    6180 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/disk.qcow2
	I0721 17:18:25.891346    6180 main.go:141] libmachine: STDOUT: 
	I0721 17:18:25.891366    6180 main.go:141] libmachine: STDERR: 
	I0721 17:18:25.891431    6180 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/disk.qcow2 +20000M
	I0721 17:18:25.899573    6180 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:18:25.899586    6180 main.go:141] libmachine: STDERR: 
	I0721 17:18:25.899604    6180 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/disk.qcow2
	I0721 17:18:25.899608    6180 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:18:25.899619    6180 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:18:25.899652    6180 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:3b:25:62:45:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/false-396000/disk.qcow2
	I0721 17:18:25.901319    6180 main.go:141] libmachine: STDOUT: 
	I0721 17:18:25.901333    6180 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:18:25.901346    6180 client.go:171] duration metric: took 211.100417ms to LocalClient.Create
	I0721 17:18:27.903500    6180 start.go:128] duration metric: took 2.271997792s to createHost
	I0721 17:18:27.903695    6180 start.go:83] releasing machines lock for "false-396000", held for 2.27253925s
	W0721 17:18:27.904093    6180 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:27.912701    6180 out.go:177] 
	W0721 17:18:27.918872    6180 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:18:27.918904    6180 out.go:239] * 
	* 
	W0721 17:18:27.921404    6180 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:18:27.929777    6180 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.94s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.830194625s)

                                                
                                                
-- stdout --
	* [kindnet-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-396000" primary control-plane node in "kindnet-396000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-396000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:18:30.134384    6289 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:18:30.134520    6289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:18:30.134523    6289 out.go:304] Setting ErrFile to fd 2...
	I0721 17:18:30.134526    6289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:18:30.134655    6289 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:18:30.135661    6289 out.go:298] Setting JSON to false
	I0721 17:18:30.151775    6289 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4673,"bootTime":1721602837,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:18:30.151843    6289 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:18:30.161471    6289 out.go:177] * [kindnet-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:18:30.165579    6289 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:18:30.165611    6289 notify.go:220] Checking for updates...
	I0721 17:18:30.172522    6289 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:18:30.175495    6289 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:18:30.178614    6289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:18:30.181489    6289 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:18:30.184518    6289 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:18:30.187820    6289 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:18:30.187879    6289 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:18:30.187934    6289 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:18:30.192496    6289 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:18:30.199478    6289 start.go:297] selected driver: qemu2
	I0721 17:18:30.199484    6289 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:18:30.199490    6289 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:18:30.201665    6289 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:18:30.204442    6289 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:18:30.207591    6289 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:18:30.207626    6289 cni.go:84] Creating CNI manager for "kindnet"
	I0721 17:18:30.207630    6289 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0721 17:18:30.207667    6289 start.go:340] cluster config:
	{Name:kindnet-396000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:18:30.211377    6289 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:18:30.216492    6289 out.go:177] * Starting "kindnet-396000" primary control-plane node in "kindnet-396000" cluster
	I0721 17:18:30.220512    6289 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:18:30.220525    6289 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:18:30.220535    6289 cache.go:56] Caching tarball of preloaded images
	I0721 17:18:30.220588    6289 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:18:30.220594    6289 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:18:30.220642    6289 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/kindnet-396000/config.json ...
	I0721 17:18:30.220654    6289 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/kindnet-396000/config.json: {Name:mka0b78dfdce1db3e497a989276dfbea307d8392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:18:30.220897    6289 start.go:360] acquireMachinesLock for kindnet-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:18:30.220928    6289 start.go:364] duration metric: took 26µs to acquireMachinesLock for "kindnet-396000"
	I0721 17:18:30.220937    6289 start.go:93] Provisioning new machine with config: &{Name:kindnet-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:18:30.220964    6289 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:18:30.228549    6289 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:18:30.243534    6289 start.go:159] libmachine.API.Create for "kindnet-396000" (driver="qemu2")
	I0721 17:18:30.243563    6289 client.go:168] LocalClient.Create starting
	I0721 17:18:30.243624    6289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:18:30.243657    6289 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:30.243672    6289 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:30.243703    6289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:18:30.243729    6289 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:30.243739    6289 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:30.244061    6289 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:18:30.384062    6289 main.go:141] libmachine: Creating SSH key...
	I0721 17:18:30.530248    6289 main.go:141] libmachine: Creating Disk image...
	I0721 17:18:30.530259    6289 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:18:30.530442    6289 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/disk.qcow2
	I0721 17:18:30.539577    6289 main.go:141] libmachine: STDOUT: 
	I0721 17:18:30.539595    6289 main.go:141] libmachine: STDERR: 
	I0721 17:18:30.539646    6289 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/disk.qcow2 +20000M
	I0721 17:18:30.547474    6289 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:18:30.547487    6289 main.go:141] libmachine: STDERR: 
	I0721 17:18:30.547514    6289 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/disk.qcow2
	I0721 17:18:30.547518    6289 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:18:30.547531    6289 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:18:30.547567    6289 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:74:9b:d6:37:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/disk.qcow2
	I0721 17:18:30.549201    6289 main.go:141] libmachine: STDOUT: 
	I0721 17:18:30.549217    6289 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:18:30.549234    6289 client.go:171] duration metric: took 305.674292ms to LocalClient.Create
	I0721 17:18:32.551376    6289 start.go:128] duration metric: took 2.330452834s to createHost
	I0721 17:18:32.551474    6289 start.go:83] releasing machines lock for "kindnet-396000", held for 2.330602833s
	W0721 17:18:32.551522    6289 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:32.558318    6289 out.go:177] * Deleting "kindnet-396000" in qemu2 ...
	W0721 17:18:32.580316    6289 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:32.580341    6289 start.go:729] Will try again in 5 seconds ...
	I0721 17:18:37.582438    6289 start.go:360] acquireMachinesLock for kindnet-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:18:37.582991    6289 start.go:364] duration metric: took 445.875µs to acquireMachinesLock for "kindnet-396000"
	I0721 17:18:37.583061    6289 start.go:93] Provisioning new machine with config: &{Name:kindnet-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:18:37.583378    6289 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:18:37.589027    6289 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:18:37.639171    6289 start.go:159] libmachine.API.Create for "kindnet-396000" (driver="qemu2")
	I0721 17:18:37.639229    6289 client.go:168] LocalClient.Create starting
	I0721 17:18:37.639369    6289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:18:37.639440    6289 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:37.639465    6289 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:37.639526    6289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:18:37.639572    6289 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:37.639585    6289 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:37.640136    6289 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:18:37.789312    6289 main.go:141] libmachine: Creating SSH key...
	I0721 17:18:37.874013    6289 main.go:141] libmachine: Creating Disk image...
	I0721 17:18:37.874019    6289 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:18:37.874203    6289 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/disk.qcow2
	I0721 17:18:37.883714    6289 main.go:141] libmachine: STDOUT: 
	I0721 17:18:37.883730    6289 main.go:141] libmachine: STDERR: 
	I0721 17:18:37.883778    6289 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/disk.qcow2 +20000M
	I0721 17:18:37.891696    6289 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:18:37.891711    6289 main.go:141] libmachine: STDERR: 
	I0721 17:18:37.891722    6289 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/disk.qcow2
	I0721 17:18:37.891729    6289 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:18:37.891746    6289 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:18:37.891771    6289 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:a4:84:72:11:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kindnet-396000/disk.qcow2
	I0721 17:18:37.893423    6289 main.go:141] libmachine: STDOUT: 
	I0721 17:18:37.893437    6289 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:18:37.893450    6289 client.go:171] duration metric: took 254.223375ms to LocalClient.Create
	I0721 17:18:39.895609    6289 start.go:128] duration metric: took 2.312234958s to createHost
	I0721 17:18:39.895710    6289 start.go:83] releasing machines lock for "kindnet-396000", held for 2.312755334s
	W0721 17:18:39.896037    6289 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:39.904734    6289 out.go:177] 
	W0721 17:18:39.910802    6289 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:18:39.910829    6289 out.go:239] * 
	* 
	W0721 17:18:39.913358    6289 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:18:39.921710    6289 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.83s)
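Every start in this group fails at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never gets a network file descriptor and host creation aborts with exit status 1. A quick way to confirm whether a socket_vmnet daemon is actually listening is to dial the socket directly. The Go sketch below does only that; it is a diagnostic illustration, not part of minikube or the test suite. The socket path is taken from the logs above, and the file name and timeout are assumptions.

// probe_socket_vmnet.go: minimal diagnostic sketch (assumed helper, not minikube code).
// It dials the unix socket that socket_vmnet_client connects to and reports the result.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socketPath = "/var/run/socket_vmnet" // path shown in the failing logs above

	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the failures in these tests:
		// the socket file may exist, but no socket_vmnet daemon is listening on it.
		fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", socketPath, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("%s is accepting connections\n", socketPath)
}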

TestNetworkPlugins/group/flannel/Start (9.73s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.729923292s)

-- stdout --
	* [flannel-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-396000" primary control-plane node in "flannel-396000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-396000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:18:42.247964    6404 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:18:42.248244    6404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:18:42.248248    6404 out.go:304] Setting ErrFile to fd 2...
	I0721 17:18:42.248250    6404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:18:42.248379    6404 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:18:42.249683    6404 out.go:298] Setting JSON to false
	I0721 17:18:42.266305    6404 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4685,"bootTime":1721602837,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:18:42.266385    6404 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:18:42.271548    6404 out.go:177] * [flannel-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:18:42.279547    6404 notify.go:220] Checking for updates...
	I0721 17:18:42.281472    6404 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:18:42.288496    6404 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:18:42.291586    6404 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:18:42.292952    6404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:18:42.295517    6404 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:18:42.298607    6404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:18:42.301947    6404 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:18:42.302015    6404 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:18:42.302062    6404 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:18:42.306485    6404 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:18:42.313564    6404 start.go:297] selected driver: qemu2
	I0721 17:18:42.313570    6404 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:18:42.313577    6404 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:18:42.315900    6404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:18:42.318518    6404 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:18:42.321636    6404 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:18:42.321666    6404 cni.go:84] Creating CNI manager for "flannel"
	I0721 17:18:42.321672    6404 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0721 17:18:42.321698    6404 start.go:340] cluster config:
	{Name:flannel-396000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:18:42.325159    6404 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:18:42.332522    6404 out.go:177] * Starting "flannel-396000" primary control-plane node in "flannel-396000" cluster
	I0721 17:18:42.336575    6404 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:18:42.336587    6404 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:18:42.336595    6404 cache.go:56] Caching tarball of preloaded images
	I0721 17:18:42.336645    6404 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:18:42.336649    6404 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:18:42.336703    6404 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/flannel-396000/config.json ...
	I0721 17:18:42.336714    6404 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/flannel-396000/config.json: {Name:mke8bdd029c307ffb82d496f16939971d4ea070e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:18:42.336911    6404 start.go:360] acquireMachinesLock for flannel-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:18:42.336941    6404 start.go:364] duration metric: took 24.958µs to acquireMachinesLock for "flannel-396000"
	I0721 17:18:42.336951    6404 start.go:93] Provisioning new machine with config: &{Name:flannel-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:18:42.336979    6404 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:18:42.345566    6404 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:18:42.360533    6404 start.go:159] libmachine.API.Create for "flannel-396000" (driver="qemu2")
	I0721 17:18:42.360559    6404 client.go:168] LocalClient.Create starting
	I0721 17:18:42.360619    6404 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:18:42.360654    6404 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:42.360662    6404 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:42.360705    6404 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:18:42.360730    6404 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:42.360741    6404 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:42.361087    6404 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:18:42.504127    6404 main.go:141] libmachine: Creating SSH key...
	I0721 17:18:42.551990    6404 main.go:141] libmachine: Creating Disk image...
	I0721 17:18:42.551999    6404 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:18:42.552210    6404 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/disk.qcow2
	I0721 17:18:42.562283    6404 main.go:141] libmachine: STDOUT: 
	I0721 17:18:42.562313    6404 main.go:141] libmachine: STDERR: 
	I0721 17:18:42.562397    6404 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/disk.qcow2 +20000M
	I0721 17:18:42.571668    6404 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:18:42.571696    6404 main.go:141] libmachine: STDERR: 
	I0721 17:18:42.571743    6404 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/disk.qcow2
	I0721 17:18:42.571755    6404 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:18:42.571772    6404 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:18:42.571803    6404 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:4d:dd:89:47:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/disk.qcow2
	I0721 17:18:42.573896    6404 main.go:141] libmachine: STDOUT: 
	I0721 17:18:42.573913    6404 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:18:42.573931    6404 client.go:171] duration metric: took 213.374292ms to LocalClient.Create
	I0721 17:18:44.574684    6404 start.go:128] duration metric: took 2.237750791s to createHost
	I0721 17:18:44.574724    6404 start.go:83] releasing machines lock for "flannel-396000", held for 2.2378385s
	W0721 17:18:44.574788    6404 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:44.585372    6404 out.go:177] * Deleting "flannel-396000" in qemu2 ...
	W0721 17:18:44.607991    6404 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:44.608006    6404 start.go:729] Will try again in 5 seconds ...
	I0721 17:18:49.610115    6404 start.go:360] acquireMachinesLock for flannel-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:18:49.610756    6404 start.go:364] duration metric: took 512.541µs to acquireMachinesLock for "flannel-396000"
	I0721 17:18:49.610928    6404 start.go:93] Provisioning new machine with config: &{Name:flannel-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:18:49.611320    6404 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:18:49.616044    6404 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:18:49.664998    6404 start.go:159] libmachine.API.Create for "flannel-396000" (driver="qemu2")
	I0721 17:18:49.665053    6404 client.go:168] LocalClient.Create starting
	I0721 17:18:49.665187    6404 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:18:49.665247    6404 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:49.665264    6404 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:49.665335    6404 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:18:49.665381    6404 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:49.665395    6404 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:49.665982    6404 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:18:49.815733    6404 main.go:141] libmachine: Creating SSH key...
	I0721 17:18:49.885406    6404 main.go:141] libmachine: Creating Disk image...
	I0721 17:18:49.885412    6404 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:18:49.885574    6404 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/disk.qcow2
	I0721 17:18:49.894992    6404 main.go:141] libmachine: STDOUT: 
	I0721 17:18:49.895010    6404 main.go:141] libmachine: STDERR: 
	I0721 17:18:49.895067    6404 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/disk.qcow2 +20000M
	I0721 17:18:49.902874    6404 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:18:49.902889    6404 main.go:141] libmachine: STDERR: 
	I0721 17:18:49.902908    6404 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/disk.qcow2
	I0721 17:18:49.902914    6404 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:18:49.902928    6404 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:18:49.902959    6404 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:10:03:78:c0:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/flannel-396000/disk.qcow2
	I0721 17:18:49.904632    6404 main.go:141] libmachine: STDOUT: 
	I0721 17:18:49.904649    6404 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:18:49.904661    6404 client.go:171] duration metric: took 239.61025ms to LocalClient.Create
	I0721 17:18:51.906809    6404 start.go:128] duration metric: took 2.295486375s to createHost
	I0721 17:18:51.906882    6404 start.go:83] releasing machines lock for "flannel-396000", held for 2.296163584s
	W0721 17:18:51.907446    6404 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:51.920159    6404 out.go:177] 
	W0721 17:18:51.923298    6404 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:18:51.923351    6404 out.go:239] * 
	* 
	W0721 17:18:51.926247    6404 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:18:51.937002    6404 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.73s)
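Before the failing network step, the driver prepares the VM disk with two qemu-img calls that succeed in every run above: a raw-to-qcow2 conversion followed by a +20000M resize. The sketch below reproduces just those two steps with os/exec so they can be exercised in isolation; the file names are placeholders for the per-profile paths in the logs, and this is an illustration rather than the actual libmachine code.

// disk_image_sketch.go: illustrative sketch of the "Creating 20000 MB hard disk image"
// steps logged above. Paths are placeholders; qemu-img must be on PATH.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and mirrors its output, like the driver's "executing:" lines.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	raw := "disk.qcow2.raw" // placeholder for .minikube/machines/<profile>/disk.qcow2.raw
	qcow2 := "disk.qcow2"   // placeholder for the converted image

	// Step 1: convert the raw seed image to qcow2, as in the "qemu-img convert" line above.
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2); err != nil {
		fmt.Fprintln(os.Stderr, "convert failed:", err)
		os.Exit(1)
	}
	// Step 2: grow the image by 20000 MB, as in the "qemu-img resize ... +20000M" line above.
	if err := run("qemu-img", "resize", qcow2, "+20000M"); err != nil {
		fmt.Fprintln(os.Stderr, "resize failed:", err)
		os.Exit(1)
	}
}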

TestNetworkPlugins/group/enable-default-cni/Start (10.03s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.030559542s)

-- stdout --
	* [enable-default-cni-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-396000" primary control-plane node in "enable-default-cni-396000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-396000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:18:54.307174    6525 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:18:54.307318    6525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:18:54.307321    6525 out.go:304] Setting ErrFile to fd 2...
	I0721 17:18:54.307323    6525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:18:54.307482    6525 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:18:54.308568    6525 out.go:298] Setting JSON to false
	I0721 17:18:54.325187    6525 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4697,"bootTime":1721602837,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:18:54.325253    6525 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:18:54.330187    6525 out.go:177] * [enable-default-cni-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:18:54.338185    6525 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:18:54.338253    6525 notify.go:220] Checking for updates...
	I0721 17:18:54.346053    6525 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:18:54.349132    6525 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:18:54.352187    6525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:18:54.355140    6525 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:18:54.358150    6525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:18:54.361481    6525 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:18:54.361547    6525 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:18:54.361595    6525 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:18:54.366039    6525 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:18:54.373123    6525 start.go:297] selected driver: qemu2
	I0721 17:18:54.373129    6525 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:18:54.373134    6525 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:18:54.375522    6525 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:18:54.380019    6525 out.go:177] * Automatically selected the socket_vmnet network
	E0721 17:18:54.383518    6525 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0721 17:18:54.383541    6525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:18:54.383559    6525 cni.go:84] Creating CNI manager for "bridge"
	I0721 17:18:54.383563    6525 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:18:54.383591    6525 start.go:340] cluster config:
	{Name:enable-default-cni-396000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:18:54.387465    6525 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:18:54.395085    6525 out.go:177] * Starting "enable-default-cni-396000" primary control-plane node in "enable-default-cni-396000" cluster
	I0721 17:18:54.399068    6525 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:18:54.399084    6525 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:18:54.399097    6525 cache.go:56] Caching tarball of preloaded images
	I0721 17:18:54.399159    6525 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:18:54.399165    6525 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:18:54.399240    6525 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/enable-default-cni-396000/config.json ...
	I0721 17:18:54.399252    6525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/enable-default-cni-396000/config.json: {Name:mk4181479a8d9aedf8ff7ceb101028714174ad2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:18:54.399467    6525 start.go:360] acquireMachinesLock for enable-default-cni-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:18:54.399501    6525 start.go:364] duration metric: took 26.375µs to acquireMachinesLock for "enable-default-cni-396000"
	I0721 17:18:54.399511    6525 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:18:54.399536    6525 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:18:54.407081    6525 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:18:54.423841    6525 start.go:159] libmachine.API.Create for "enable-default-cni-396000" (driver="qemu2")
	I0721 17:18:54.423875    6525 client.go:168] LocalClient.Create starting
	I0721 17:18:54.423944    6525 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:18:54.423975    6525 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:54.423984    6525 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:54.424021    6525 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:18:54.424044    6525 main.go:141] libmachine: Decoding PEM data...
	I0721 17:18:54.424053    6525 main.go:141] libmachine: Parsing certificate...
	I0721 17:18:54.424417    6525 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:18:54.565032    6525 main.go:141] libmachine: Creating SSH key...
	I0721 17:18:54.750531    6525 main.go:141] libmachine: Creating Disk image...
	I0721 17:18:54.750539    6525 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:18:54.750766    6525 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/disk.qcow2
	I0721 17:18:54.760627    6525 main.go:141] libmachine: STDOUT: 
	I0721 17:18:54.760646    6525 main.go:141] libmachine: STDERR: 
	I0721 17:18:54.760698    6525 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/disk.qcow2 +20000M
	I0721 17:18:54.768667    6525 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:18:54.768687    6525 main.go:141] libmachine: STDERR: 
	I0721 17:18:54.768708    6525 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/disk.qcow2
	I0721 17:18:54.768716    6525 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:18:54.768728    6525 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:18:54.768757    6525 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ef:55:cc:49:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/disk.qcow2
	I0721 17:18:54.770577    6525 main.go:141] libmachine: STDOUT: 
	I0721 17:18:54.770591    6525 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:18:54.770608    6525 client.go:171] duration metric: took 346.739291ms to LocalClient.Create
	I0721 17:18:56.772955    6525 start.go:128] duration metric: took 2.37343425s to createHost
	I0721 17:18:56.773095    6525 start.go:83] releasing machines lock for "enable-default-cni-396000", held for 2.373649625s
	W0721 17:18:56.773160    6525 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:56.788469    6525 out.go:177] * Deleting "enable-default-cni-396000" in qemu2 ...
	W0721 17:18:56.813645    6525 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:18:56.813679    6525 start.go:729] Will try again in 5 seconds ...
	I0721 17:19:01.815799    6525 start.go:360] acquireMachinesLock for enable-default-cni-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:19:01.816369    6525 start.go:364] duration metric: took 425.125µs to acquireMachinesLock for "enable-default-cni-396000"
	I0721 17:19:01.816452    6525 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:19:01.816798    6525 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:19:01.826290    6525 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:19:01.865919    6525 start.go:159] libmachine.API.Create for "enable-default-cni-396000" (driver="qemu2")
	I0721 17:19:01.865980    6525 client.go:168] LocalClient.Create starting
	I0721 17:19:01.866091    6525 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:19:01.866159    6525 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:01.866173    6525 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:01.866232    6525 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:19:01.866271    6525 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:01.866286    6525 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:01.866784    6525 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:19:02.030647    6525 main.go:141] libmachine: Creating SSH key...
	I0721 17:19:02.251352    6525 main.go:141] libmachine: Creating Disk image...
	I0721 17:19:02.251365    6525 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:19:02.251555    6525 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/disk.qcow2
	I0721 17:19:02.261468    6525 main.go:141] libmachine: STDOUT: 
	I0721 17:19:02.261491    6525 main.go:141] libmachine: STDERR: 
	I0721 17:19:02.261549    6525 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/disk.qcow2 +20000M
	I0721 17:19:02.269918    6525 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:19:02.269933    6525 main.go:141] libmachine: STDERR: 
	I0721 17:19:02.269946    6525 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/disk.qcow2
	I0721 17:19:02.269953    6525 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:19:02.269965    6525 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:19:02.270003    6525 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:69:37:0b:14:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/enable-default-cni-396000/disk.qcow2
	I0721 17:19:02.271788    6525 main.go:141] libmachine: STDOUT: 
	I0721 17:19:02.271802    6525 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:19:02.271815    6525 client.go:171] duration metric: took 405.841583ms to LocalClient.Create
	I0721 17:19:04.273929    6525 start.go:128] duration metric: took 2.457149458s to createHost
	I0721 17:19:04.273985    6525 start.go:83] releasing machines lock for "enable-default-cni-396000", held for 2.457661709s
	W0721 17:19:04.274304    6525 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:04.283694    6525 out.go:177] 
	W0721 17:19:04.288926    6525 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:19:04.288942    6525 out.go:239] * 
	* 
	W0721 17:19:04.290296    6525 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:19:04.300808    6525 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.03s)
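Every start in this group fails the same way: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never handed its vmnet socket and minikube exits with GUEST_PROVISION. A quick way to confirm the daemon state on the build host is sketched below; the binary and socket paths are taken from the log above, while the restart command and its --vmnet-gateway flag are assumptions about how socket_vmnet is set up on this agent, not something the report verifies.

    # Check that the socket exists and a socket_vmnet process is serving it (paths from the log above).
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If nothing is listening, (re)start the daemon; gateway value is socket_vmnet's documented
    # default and is assumed here, not taken from this report.
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet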

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.820395542s)

                                                
                                                
-- stdout --
	* [bridge-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-396000" primary control-plane node in "bridge-396000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-396000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:19:06.489216    6636 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:19:06.489341    6636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:19:06.489344    6636 out.go:304] Setting ErrFile to fd 2...
	I0721 17:19:06.489347    6636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:19:06.489481    6636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:19:06.490592    6636 out.go:298] Setting JSON to false
	I0721 17:19:06.507901    6636 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4709,"bootTime":1721602837,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:19:06.507971    6636 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:19:06.513650    6636 out.go:177] * [bridge-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:19:06.521507    6636 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:19:06.521553    6636 notify.go:220] Checking for updates...
	I0721 17:19:06.532481    6636 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:19:06.535559    6636 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:19:06.538563    6636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:19:06.541526    6636 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:19:06.544542    6636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:19:06.547850    6636 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:19:06.547926    6636 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:19:06.547986    6636 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:19:06.552524    6636 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:19:06.567123    6636 start.go:297] selected driver: qemu2
	I0721 17:19:06.567130    6636 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:19:06.567136    6636 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:19:06.569628    6636 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:19:06.572507    6636 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:19:06.575633    6636 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:19:06.575665    6636 cni.go:84] Creating CNI manager for "bridge"
	I0721 17:19:06.575669    6636 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:19:06.575703    6636 start.go:340] cluster config:
	{Name:bridge-396000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:19:06.579594    6636 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:19:06.585458    6636 out.go:177] * Starting "bridge-396000" primary control-plane node in "bridge-396000" cluster
	I0721 17:19:06.589525    6636 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:19:06.589539    6636 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:19:06.589549    6636 cache.go:56] Caching tarball of preloaded images
	I0721 17:19:06.589608    6636 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:19:06.589614    6636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:19:06.589685    6636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/bridge-396000/config.json ...
	I0721 17:19:06.589698    6636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/bridge-396000/config.json: {Name:mk8a9e285476bee7cf21673bc91dbebf8b3b0f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:19:06.589927    6636 start.go:360] acquireMachinesLock for bridge-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:19:06.589963    6636 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "bridge-396000"
	I0721 17:19:06.589974    6636 start.go:93] Provisioning new machine with config: &{Name:bridge-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:19:06.590001    6636 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:19:06.598511    6636 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:19:06.616971    6636 start.go:159] libmachine.API.Create for "bridge-396000" (driver="qemu2")
	I0721 17:19:06.617005    6636 client.go:168] LocalClient.Create starting
	I0721 17:19:06.617094    6636 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:19:06.617126    6636 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:06.617136    6636 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:06.617181    6636 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:19:06.617205    6636 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:06.617217    6636 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:06.617595    6636 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:19:06.761094    6636 main.go:141] libmachine: Creating SSH key...
	I0721 17:19:06.846783    6636 main.go:141] libmachine: Creating Disk image...
	I0721 17:19:06.846788    6636 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:19:06.846965    6636 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/disk.qcow2
	I0721 17:19:06.856085    6636 main.go:141] libmachine: STDOUT: 
	I0721 17:19:06.856104    6636 main.go:141] libmachine: STDERR: 
	I0721 17:19:06.856160    6636 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/disk.qcow2 +20000M
	I0721 17:19:06.864085    6636 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:19:06.864099    6636 main.go:141] libmachine: STDERR: 
	I0721 17:19:06.864113    6636 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/disk.qcow2
	I0721 17:19:06.864126    6636 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:19:06.864137    6636 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:19:06.864163    6636 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:71:bc:05:83:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/disk.qcow2
	I0721 17:19:06.865852    6636 main.go:141] libmachine: STDOUT: 
	I0721 17:19:06.865865    6636 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:19:06.865881    6636 client.go:171] duration metric: took 248.879792ms to LocalClient.Create
	I0721 17:19:08.868031    6636 start.go:128] duration metric: took 2.278060417s to createHost
	I0721 17:19:08.868104    6636 start.go:83] releasing machines lock for "bridge-396000", held for 2.278194208s
	W0721 17:19:08.868201    6636 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:08.880474    6636 out.go:177] * Deleting "bridge-396000" in qemu2 ...
	W0721 17:19:08.905826    6636 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:08.905866    6636 start.go:729] Will try again in 5 seconds ...
	I0721 17:19:13.908083    6636 start.go:360] acquireMachinesLock for bridge-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:19:13.908684    6636 start.go:364] duration metric: took 488.833µs to acquireMachinesLock for "bridge-396000"
	I0721 17:19:13.908836    6636 start.go:93] Provisioning new machine with config: &{Name:bridge-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:19:13.909135    6636 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:19:13.917756    6636 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:19:13.968460    6636 start.go:159] libmachine.API.Create for "bridge-396000" (driver="qemu2")
	I0721 17:19:13.968524    6636 client.go:168] LocalClient.Create starting
	I0721 17:19:13.968650    6636 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:19:13.968723    6636 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:13.968738    6636 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:13.968845    6636 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:19:13.968917    6636 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:13.968931    6636 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:13.969458    6636 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:19:14.119770    6636 main.go:141] libmachine: Creating SSH key...
	I0721 17:19:14.213271    6636 main.go:141] libmachine: Creating Disk image...
	I0721 17:19:14.213282    6636 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:19:14.213483    6636 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/disk.qcow2
	I0721 17:19:14.223939    6636 main.go:141] libmachine: STDOUT: 
	I0721 17:19:14.223966    6636 main.go:141] libmachine: STDERR: 
	I0721 17:19:14.224053    6636 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/disk.qcow2 +20000M
	I0721 17:19:14.233547    6636 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:19:14.233567    6636 main.go:141] libmachine: STDERR: 
	I0721 17:19:14.233581    6636 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/disk.qcow2
	I0721 17:19:14.233597    6636 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:19:14.233609    6636 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:19:14.233638    6636 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:c7:81:6f:c7:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/bridge-396000/disk.qcow2
	I0721 17:19:14.235747    6636 main.go:141] libmachine: STDOUT: 
	I0721 17:19:14.235764    6636 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:19:14.235783    6636 client.go:171] duration metric: took 267.258542ms to LocalClient.Create
	I0721 17:19:16.237935    6636 start.go:128] duration metric: took 2.32882975s to createHost
	I0721 17:19:16.238007    6636 start.go:83] releasing machines lock for "bridge-396000", held for 2.329364292s
	W0721 17:19:16.238357    6636 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:16.246890    6636 out.go:177] 
	W0721 17:19:16.251122    6636 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:19:16.251169    6636 out.go:239] * 
	* 
	W0721 17:19:16.253174    6636 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:19:16.264059    6636 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.82s)
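The bridge run repeats the pattern: disk preparation succeeds (qemu-img convert and resize both return cleanly), and the failure only appears when socket_vmnet_client tries to connect and pass the socket to QEMU as fd 3. A minimal reachability probe, reusing the client binary and socket path from the command line above but substituting a trivial command for qemu-system-aarch64, is one way to separate "daemon not listening" from a QEMU-side problem; treating /usr/bin/true as a stand-in command is an assumption based on the client's usage shown in the log, not a documented test mode.

    # socket_vmnet_client connects to the socket, then runs the given command with the
    # connection passed as an inherited fd; /usr/bin/true stands in for qemu-system-aarch64.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true \
      && echo "socket_vmnet reachable" \
      || echo "connection refused: daemon not listening on /var/run/socket_vmnet"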

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-396000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.810038916s)

                                                
                                                
-- stdout --
	* [kubenet-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-396000" primary control-plane node in "kubenet-396000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-396000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:19:18.428727    6750 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:19:18.428855    6750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:19:18.428858    6750 out.go:304] Setting ErrFile to fd 2...
	I0721 17:19:18.428860    6750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:19:18.428972    6750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:19:18.430096    6750 out.go:298] Setting JSON to false
	I0721 17:19:18.446375    6750 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4721,"bootTime":1721602837,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:19:18.446442    6750 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:19:18.450991    6750 out.go:177] * [kubenet-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:19:18.457979    6750 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:19:18.458059    6750 notify.go:220] Checking for updates...
	I0721 17:19:18.466001    6750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:19:18.469934    6750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:19:18.472974    6750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:19:18.476048    6750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:19:18.478941    6750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:19:18.482263    6750 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:19:18.482328    6750 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:19:18.482391    6750 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:19:18.485983    6750 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:19:18.492917    6750 start.go:297] selected driver: qemu2
	I0721 17:19:18.492923    6750 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:19:18.492929    6750 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:19:18.495241    6750 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:19:18.497986    6750 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:19:18.501067    6750 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:19:18.501086    6750 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0721 17:19:18.501131    6750 start.go:340] cluster config:
	{Name:kubenet-396000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:19:18.504992    6750 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:19:18.511963    6750 out.go:177] * Starting "kubenet-396000" primary control-plane node in "kubenet-396000" cluster
	I0721 17:19:18.515870    6750 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:19:18.515888    6750 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:19:18.515899    6750 cache.go:56] Caching tarball of preloaded images
	I0721 17:19:18.515950    6750 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:19:18.515955    6750 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:19:18.516008    6750 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/kubenet-396000/config.json ...
	I0721 17:19:18.516020    6750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/kubenet-396000/config.json: {Name:mk0b5b09f0b47020f7f933c935067115adcb88d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:19:18.516331    6750 start.go:360] acquireMachinesLock for kubenet-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:19:18.516363    6750 start.go:364] duration metric: took 26.75µs to acquireMachinesLock for "kubenet-396000"
	I0721 17:19:18.516373    6750 start.go:93] Provisioning new machine with config: &{Name:kubenet-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:19:18.516396    6750 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:19:18.518253    6750 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:19:18.533440    6750 start.go:159] libmachine.API.Create for "kubenet-396000" (driver="qemu2")
	I0721 17:19:18.533468    6750 client.go:168] LocalClient.Create starting
	I0721 17:19:18.533521    6750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:19:18.533555    6750 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:18.533564    6750 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:18.533607    6750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:19:18.533630    6750 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:18.533638    6750 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:18.534023    6750 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:19:18.674510    6750 main.go:141] libmachine: Creating SSH key...
	I0721 17:19:18.791916    6750 main.go:141] libmachine: Creating Disk image...
	I0721 17:19:18.791922    6750 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:19:18.792083    6750 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/disk.qcow2
	I0721 17:19:18.801421    6750 main.go:141] libmachine: STDOUT: 
	I0721 17:19:18.801441    6750 main.go:141] libmachine: STDERR: 
	I0721 17:19:18.801485    6750 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/disk.qcow2 +20000M
	I0721 17:19:18.809571    6750 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:19:18.809587    6750 main.go:141] libmachine: STDERR: 
	I0721 17:19:18.809600    6750 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/disk.qcow2
	I0721 17:19:18.809606    6750 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:19:18.809620    6750 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:19:18.809653    6750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:ba:9e:dd:44:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/disk.qcow2
	I0721 17:19:18.811329    6750 main.go:141] libmachine: STDOUT: 
	I0721 17:19:18.811345    6750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:19:18.811363    6750 client.go:171] duration metric: took 277.899417ms to LocalClient.Create
	I0721 17:19:20.813513    6750 start.go:128] duration metric: took 2.297150958s to createHost
	I0721 17:19:20.813581    6750 start.go:83] releasing machines lock for "kubenet-396000", held for 2.297271584s
	W0721 17:19:20.813778    6750 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:20.827658    6750 out.go:177] * Deleting "kubenet-396000" in qemu2 ...
	W0721 17:19:20.847811    6750 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:20.847834    6750 start.go:729] Will try again in 5 seconds ...
	I0721 17:19:25.849251    6750 start.go:360] acquireMachinesLock for kubenet-396000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:19:25.849771    6750 start.go:364] duration metric: took 424.25µs to acquireMachinesLock for "kubenet-396000"
	I0721 17:19:25.849888    6750 start.go:93] Provisioning new machine with config: &{Name:kubenet-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:19:25.850098    6750 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:19:25.859346    6750 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0721 17:19:25.901429    6750 start.go:159] libmachine.API.Create for "kubenet-396000" (driver="qemu2")
	I0721 17:19:25.901475    6750 client.go:168] LocalClient.Create starting
	I0721 17:19:25.901579    6750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:19:25.901657    6750 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:25.901673    6750 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:25.901734    6750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:19:25.901778    6750 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:25.901787    6750 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:25.902276    6750 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:19:26.052629    6750 main.go:141] libmachine: Creating SSH key...
	I0721 17:19:26.152919    6750 main.go:141] libmachine: Creating Disk image...
	I0721 17:19:26.152928    6750 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:19:26.153108    6750 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/disk.qcow2
	I0721 17:19:26.162700    6750 main.go:141] libmachine: STDOUT: 
	I0721 17:19:26.162716    6750 main.go:141] libmachine: STDERR: 
	I0721 17:19:26.162784    6750 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/disk.qcow2 +20000M
	I0721 17:19:26.170777    6750 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:19:26.170792    6750 main.go:141] libmachine: STDERR: 
	I0721 17:19:26.170804    6750 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/disk.qcow2
	I0721 17:19:26.170809    6750 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:19:26.170818    6750 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:19:26.170855    6750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:78:b8:e5:aa:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/kubenet-396000/disk.qcow2
	I0721 17:19:26.172536    6750 main.go:141] libmachine: STDOUT: 
	I0721 17:19:26.172549    6750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:19:26.172560    6750 client.go:171] duration metric: took 271.0885ms to LocalClient.Create
	I0721 17:19:28.174763    6750 start.go:128] duration metric: took 2.324683917s to createHost
	I0721 17:19:28.174839    6750 start.go:83] releasing machines lock for "kubenet-396000", held for 2.32511s
	W0721 17:19:28.175305    6750 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:28.184891    6750 out.go:177] 
	W0721 17:19:28.189039    6750 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:19:28.189062    6750 out.go:239] * 
	* 
	W0721 17:19:28.190937    6750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:19:28.198893    6750 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.81s)
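Each of these network-plugin failures leaves a partially created profile behind, and the log output itself recommends "minikube delete -p <profile>" as the recovery step. A small sweep over the profiles named in this section, using only that suggested command, is sketched below for cleaning up the agent between runs.

    # Remove the half-created profiles named in the failures above, using the
    # "minikube delete -p ..." recovery step the log messages themselves suggest.
    for p in enable-default-cni-396000 bridge-396000 kubenet-396000; do
      out/minikube-darwin-arm64 delete -p "$p"
    done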

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-749000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-749000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.866289125s)

                                                
                                                
-- stdout --
	* [old-k8s-version-749000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-749000" primary control-plane node in "old-k8s-version-749000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-749000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:19:30.548881    6861 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:19:30.549151    6861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:19:30.549155    6861 out.go:304] Setting ErrFile to fd 2...
	I0721 17:19:30.549157    6861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:19:30.549305    6861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:19:30.550598    6861 out.go:298] Setting JSON to false
	I0721 17:19:30.568107    6861 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4733,"bootTime":1721602837,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:19:30.568187    6861 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:19:30.571696    6861 out.go:177] * [old-k8s-version-749000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:19:30.578718    6861 notify.go:220] Checking for updates...
	I0721 17:19:30.582711    6861 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:19:30.590708    6861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:19:30.598694    6861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:19:30.608719    6861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:19:30.612650    6861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:19:30.615659    6861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:19:30.621015    6861 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:19:30.621083    6861 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:19:30.621126    6861 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:19:30.624692    6861 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:19:30.631631    6861 start.go:297] selected driver: qemu2
	I0721 17:19:30.631642    6861 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:19:30.631649    6861 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:19:30.634024    6861 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:19:30.637676    6861 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:19:30.640726    6861 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:19:30.640763    6861 cni.go:84] Creating CNI manager for ""
	I0721 17:19:30.640771    6861 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0721 17:19:30.640804    6861 start.go:340] cluster config:
	{Name:old-k8s-version-749000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:19:30.644601    6861 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:19:30.650551    6861 out.go:177] * Starting "old-k8s-version-749000" primary control-plane node in "old-k8s-version-749000" cluster
	I0721 17:19:30.654667    6861 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 17:19:30.654681    6861 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0721 17:19:30.654692    6861 cache.go:56] Caching tarball of preloaded images
	I0721 17:19:30.654741    6861 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:19:30.654746    6861 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0721 17:19:30.654807    6861 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/old-k8s-version-749000/config.json ...
	I0721 17:19:30.654819    6861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/old-k8s-version-749000/config.json: {Name:mk3cf70b2b89360f6137073889d1103820664293 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:19:30.655032    6861 start.go:360] acquireMachinesLock for old-k8s-version-749000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:19:30.655064    6861 start.go:364] duration metric: took 25.709µs to acquireMachinesLock for "old-k8s-version-749000"
	I0721 17:19:30.655074    6861 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:19:30.655105    6861 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:19:30.662630    6861 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:19:30.678954    6861 start.go:159] libmachine.API.Create for "old-k8s-version-749000" (driver="qemu2")
	I0721 17:19:30.678990    6861 client.go:168] LocalClient.Create starting
	I0721 17:19:30.679060    6861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:19:30.679104    6861 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:30.679112    6861 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:30.679152    6861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:19:30.679177    6861 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:30.679186    6861 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:30.679571    6861 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:19:30.820197    6861 main.go:141] libmachine: Creating SSH key...
	I0721 17:19:30.973709    6861 main.go:141] libmachine: Creating Disk image...
	I0721 17:19:30.973715    6861 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:19:30.973898    6861 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0721 17:19:30.983130    6861 main.go:141] libmachine: STDOUT: 
	I0721 17:19:30.983152    6861 main.go:141] libmachine: STDERR: 
	I0721 17:19:30.983206    6861 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2 +20000M
	I0721 17:19:30.991212    6861 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:19:30.991228    6861 main.go:141] libmachine: STDERR: 
	I0721 17:19:30.991241    6861 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0721 17:19:30.991247    6861 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:19:30.991258    6861 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:19:30.991281    6861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:a6:75:e6:49:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0721 17:19:30.993076    6861 main.go:141] libmachine: STDOUT: 
	I0721 17:19:30.993095    6861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:19:30.993115    6861 client.go:171] duration metric: took 314.130958ms to LocalClient.Create
	I0721 17:19:32.995162    6861 start.go:128] duration metric: took 2.340109834s to createHost
	I0721 17:19:32.995223    6861 start.go:83] releasing machines lock for "old-k8s-version-749000", held for 2.340218167s
	W0721 17:19:32.995264    6861 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:33.011398    6861 out.go:177] * Deleting "old-k8s-version-749000" in qemu2 ...
	W0721 17:19:33.028959    6861 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:33.028974    6861 start.go:729] Will try again in 5 seconds ...
	I0721 17:19:38.031012    6861 start.go:360] acquireMachinesLock for old-k8s-version-749000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:19:38.031312    6861 start.go:364] duration metric: took 244.209µs to acquireMachinesLock for "old-k8s-version-749000"
	I0721 17:19:38.031386    6861 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:19:38.031521    6861 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:19:38.035837    6861 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:19:38.067908    6861 start.go:159] libmachine.API.Create for "old-k8s-version-749000" (driver="qemu2")
	I0721 17:19:38.067954    6861 client.go:168] LocalClient.Create starting
	I0721 17:19:38.068073    6861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:19:38.068123    6861 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:38.068135    6861 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:38.068181    6861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:19:38.068216    6861 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:38.068229    6861 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:38.068647    6861 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:19:38.214258    6861 main.go:141] libmachine: Creating SSH key...
	I0721 17:19:38.325385    6861 main.go:141] libmachine: Creating Disk image...
	I0721 17:19:38.325396    6861 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:19:38.325689    6861 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0721 17:19:38.334880    6861 main.go:141] libmachine: STDOUT: 
	I0721 17:19:38.334898    6861 main.go:141] libmachine: STDERR: 
	I0721 17:19:38.334947    6861 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2 +20000M
	I0721 17:19:38.342857    6861 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:19:38.342873    6861 main.go:141] libmachine: STDERR: 
	I0721 17:19:38.342884    6861 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0721 17:19:38.342887    6861 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:19:38.342902    6861 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:19:38.342930    6861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:1f:ec:97:90:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0721 17:19:38.344585    6861 main.go:141] libmachine: STDOUT: 
	I0721 17:19:38.344601    6861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:19:38.344613    6861 client.go:171] duration metric: took 276.662ms to LocalClient.Create
	I0721 17:19:40.346761    6861 start.go:128] duration metric: took 2.3152765s to createHost
	I0721 17:19:40.346864    6861 start.go:83] releasing machines lock for "old-k8s-version-749000", held for 2.3155955s
	W0721 17:19:40.347496    6861 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-749000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-749000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:40.357003    6861 out.go:177] 
	W0721 17:19:40.361115    6861 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:19:40.361184    6861 out.go:239] * 
	* 
	W0721 17:19:40.364318    6861 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:19:40.371036    6861 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-749000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (63.630834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.93s)
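Every failed start in this group ends with the same driver error: socket_vmnet_client cannot reach "/var/run/socket_vmnet". A minimal spot-check for the CI host, assuming the default /opt/socket_vmnet install that the logs above invoke (the `true` probe command is illustrative and not part of the test suite):

	# confirm the socket exists and a socket_vmnet daemon is running
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# probe the socket with the same client binary the qemu2 driver uses;
	# a dead daemon reproduces the "Connection refused" error seen in these tests
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo "socket_vmnet reachable"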

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-749000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-749000 create -f testdata/busybox.yaml: exit status 1 (29.899667ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-749000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-749000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (29.59975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (29.706875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-749000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-749000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-749000 describe deploy/metrics-server -n kube-system: exit status 1 (27.345041ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-749000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-749000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (29.04125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-749000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-749000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.17904425s)

                                                
                                                
-- stdout --
	* [old-k8s-version-749000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-749000" primary control-plane node in "old-k8s-version-749000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-749000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-749000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:19:44.711557    6916 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:19:44.711670    6916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:19:44.711673    6916 out.go:304] Setting ErrFile to fd 2...
	I0721 17:19:44.711675    6916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:19:44.711809    6916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:19:44.712808    6916 out.go:298] Setting JSON to false
	I0721 17:19:44.729538    6916 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4747,"bootTime":1721602837,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:19:44.729605    6916 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:19:44.731909    6916 out.go:177] * [old-k8s-version-749000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:19:44.739280    6916 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:19:44.739377    6916 notify.go:220] Checking for updates...
	I0721 17:19:44.745235    6916 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:19:44.748214    6916 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:19:44.749670    6916 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:19:44.752223    6916 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:19:44.755213    6916 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:19:44.758590    6916 config.go:182] Loaded profile config "old-k8s-version-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0721 17:19:44.762151    6916 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0721 17:19:44.765210    6916 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:19:44.769216    6916 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 17:19:44.776161    6916 start.go:297] selected driver: qemu2
	I0721 17:19:44.776167    6916 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:19:44.776218    6916 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:19:44.778585    6916 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:19:44.778611    6916 cni.go:84] Creating CNI manager for ""
	I0721 17:19:44.778618    6916 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0721 17:19:44.778645    6916 start.go:340] cluster config:
	{Name:old-k8s-version-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-749000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:19:44.782178    6916 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:19:44.789183    6916 out.go:177] * Starting "old-k8s-version-749000" primary control-plane node in "old-k8s-version-749000" cluster
	I0721 17:19:44.793230    6916 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 17:19:44.793243    6916 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0721 17:19:44.793257    6916 cache.go:56] Caching tarball of preloaded images
	I0721 17:19:44.793319    6916 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:19:44.793325    6916 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0721 17:19:44.793380    6916 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/old-k8s-version-749000/config.json ...
	I0721 17:19:44.793845    6916 start.go:360] acquireMachinesLock for old-k8s-version-749000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:19:44.793874    6916 start.go:364] duration metric: took 23.625µs to acquireMachinesLock for "old-k8s-version-749000"
	I0721 17:19:44.793884    6916 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:19:44.793890    6916 fix.go:54] fixHost starting: 
	I0721 17:19:44.794015    6916 fix.go:112] recreateIfNeeded on old-k8s-version-749000: state=Stopped err=<nil>
	W0721 17:19:44.794023    6916 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:19:44.798194    6916 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-749000" ...
	I0721 17:19:44.806142    6916 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:19:44.806182    6916 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:1f:ec:97:90:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0721 17:19:44.808240    6916 main.go:141] libmachine: STDOUT: 
	I0721 17:19:44.808260    6916 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:19:44.808288    6916 fix.go:56] duration metric: took 14.397875ms for fixHost
	I0721 17:19:44.808293    6916 start.go:83] releasing machines lock for "old-k8s-version-749000", held for 14.414042ms
	W0721 17:19:44.808299    6916 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:19:44.808345    6916 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:44.808351    6916 start.go:729] Will try again in 5 seconds ...
	I0721 17:19:49.810517    6916 start.go:360] acquireMachinesLock for old-k8s-version-749000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:19:49.811066    6916 start.go:364] duration metric: took 427.208µs to acquireMachinesLock for "old-k8s-version-749000"
	I0721 17:19:49.811237    6916 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:19:49.811252    6916 fix.go:54] fixHost starting: 
	I0721 17:19:49.811731    6916 fix.go:112] recreateIfNeeded on old-k8s-version-749000: state=Stopped err=<nil>
	W0721 17:19:49.811749    6916 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:19:49.820304    6916 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-749000" ...
	I0721 17:19:49.824268    6916 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:19:49.824419    6916 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:1f:ec:97:90:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0721 17:19:49.831305    6916 main.go:141] libmachine: STDOUT: 
	I0721 17:19:49.831352    6916 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:19:49.831417    6916 fix.go:56] duration metric: took 20.166625ms for fixHost
	I0721 17:19:49.831432    6916 start.go:83] releasing machines lock for "old-k8s-version-749000", held for 20.349083ms
	W0721 17:19:49.831555    6916 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-749000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-749000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:49.839407    6916 out.go:177] 
	W0721 17:19:49.843343    6916 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:19:49.843370    6916 out.go:239] * 
	* 
	W0721 17:19:49.844486    6916 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:19:49.851276    6916 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-749000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (58.183708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-749000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (29.971333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-749000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-749000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-749000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.990583ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-749000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-749000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (28.154084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-749000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (29.886083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-749000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-749000 --alsologtostderr -v=1: exit status 83 (38.495958ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-749000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:19:50.108933    6939 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:19:50.109336    6939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:19:50.109339    6939 out.go:304] Setting ErrFile to fd 2...
	I0721 17:19:50.109342    6939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:19:50.109466    6939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:19:50.109670    6939 out.go:298] Setting JSON to false
	I0721 17:19:50.109677    6939 mustload.go:65] Loading cluster: old-k8s-version-749000
	I0721 17:19:50.109856    6939 config.go:182] Loaded profile config "old-k8s-version-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0721 17:19:50.112689    6939 out.go:177] * The control-plane node old-k8s-version-749000 host is not running: state=Stopped
	I0721 17:19:50.115740    6939 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-749000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-749000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (28.556625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (28.669834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-980000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-980000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.793801041s)

-- stdout --
	* [no-preload-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-980000" primary control-plane node in "no-preload-980000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-980000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:19:50.418426    6956 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:19:50.418588    6956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:19:50.418592    6956 out.go:304] Setting ErrFile to fd 2...
	I0721 17:19:50.418594    6956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:19:50.418717    6956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:19:50.419780    6956 out.go:298] Setting JSON to false
	I0721 17:19:50.436269    6956 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4753,"bootTime":1721602837,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:19:50.436369    6956 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:19:50.440980    6956 out.go:177] * [no-preload-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:19:50.448050    6956 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:19:50.448099    6956 notify.go:220] Checking for updates...
	I0721 17:19:50.454938    6956 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:19:50.458015    6956 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:19:50.460919    6956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:19:50.463944    6956 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:19:50.467011    6956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:19:50.470251    6956 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:19:50.470314    6956 config.go:182] Loaded profile config "stopped-upgrade-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0721 17:19:50.470372    6956 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:19:50.473904    6956 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:19:50.479967    6956 start.go:297] selected driver: qemu2
	I0721 17:19:50.479974    6956 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:19:50.479980    6956 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:19:50.482345    6956 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:19:50.485895    6956 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:19:50.488984    6956 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:19:50.489000    6956 cni.go:84] Creating CNI manager for ""
	I0721 17:19:50.489005    6956 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:19:50.489008    6956 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:19:50.489035    6956 start.go:340] cluster config:
	{Name:no-preload-980000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:19:50.492653    6956 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:19:50.499985    6956 out.go:177] * Starting "no-preload-980000" primary control-plane node in "no-preload-980000" cluster
	I0721 17:19:50.503915    6956 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0721 17:19:50.504002    6956 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/no-preload-980000/config.json ...
	I0721 17:19:50.504032    6956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/no-preload-980000/config.json: {Name:mk3e3f6222293aa17af112611d8ee95a63efd8c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:19:50.504042    6956 cache.go:107] acquiring lock: {Name:mk2324657e2398ee084d01833edfc8d0e662b64a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:19:50.504044    6956 cache.go:107] acquiring lock: {Name:mk23e1a5adc8052546ad5ee221d04394b7657d40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:19:50.504079    6956 cache.go:107] acquiring lock: {Name:mkbde0065602fbd4fbb10d69a8d2decc9d903ef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:19:50.504122    6956 cache.go:115] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0721 17:19:50.504130    6956 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 87.708µs
	I0721 17:19:50.504137    6956 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0721 17:19:50.504143    6956 cache.go:107] acquiring lock: {Name:mka0846d702e11f6e91afa564b60c38b2ec2c668 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:19:50.504159    6956 cache.go:107] acquiring lock: {Name:mk9ee8f02d7104c1dcfb77f567f7c814141afa3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:19:50.504229    6956 cache.go:107] acquiring lock: {Name:mkecbbc39f47cb3d63d2278731a93d249c8d9718 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:19:50.504212    6956 cache.go:107] acquiring lock: {Name:mk9bb7ee73fd6505463742802170aef00a12b50c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:19:50.504269    6956 cache.go:107] acquiring lock: {Name:mk9665d858bd0c1e7fbba239990d7a27712281ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:19:50.504300    6956 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0721 17:19:50.504310    6956 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0721 17:19:50.504330    6956 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0721 17:19:50.504363    6956 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0721 17:19:50.504368    6956 start.go:360] acquireMachinesLock for no-preload-980000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:19:50.504381    6956 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0721 17:19:50.504413    6956 start.go:364] duration metric: took 40.875µs to acquireMachinesLock for "no-preload-980000"
	I0721 17:19:50.504445    6956 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0721 17:19:50.504423    6956 start.go:93] Provisioning new machine with config: &{Name:no-preload-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:19:50.504458    6956 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:19:50.504564    6956 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0721 17:19:50.511859    6956 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:19:50.515432    6956 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0721 17:19:50.515543    6956 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0721 17:19:50.515931    6956 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0721 17:19:50.518037    6956 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0721 17:19:50.518127    6956 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0721 17:19:50.518208    6956 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0721 17:19:50.518264    6956 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0721 17:19:50.527446    6956 start.go:159] libmachine.API.Create for "no-preload-980000" (driver="qemu2")
	I0721 17:19:50.527470    6956 client.go:168] LocalClient.Create starting
	I0721 17:19:50.527537    6956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:19:50.527567    6956 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:50.527575    6956 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:50.527613    6956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:19:50.527643    6956 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:50.527650    6956 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:50.527962    6956 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:19:50.674763    6956 main.go:141] libmachine: Creating SSH key...
	I0721 17:19:50.834594    6956 main.go:141] libmachine: Creating Disk image...
	I0721 17:19:50.834612    6956 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:19:50.834795    6956 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2
	I0721 17:19:50.844027    6956 main.go:141] libmachine: STDOUT: 
	I0721 17:19:50.844043    6956 main.go:141] libmachine: STDERR: 
	I0721 17:19:50.844095    6956 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2 +20000M
	I0721 17:19:50.852250    6956 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:19:50.852264    6956 main.go:141] libmachine: STDERR: 
	I0721 17:19:50.852275    6956 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2
	I0721 17:19:50.852280    6956 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:19:50.852291    6956 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:19:50.852318    6956 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:9f:30:1b:73:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2
	I0721 17:19:50.854093    6956 main.go:141] libmachine: STDOUT: 
	I0721 17:19:50.854111    6956 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:19:50.854129    6956 client.go:171] duration metric: took 326.664791ms to LocalClient.Create
	I0721 17:19:52.643517    6956 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0721 17:19:52.787808    6956 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0721 17:19:52.802684    6956 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0721 17:19:52.802997    6956 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0721 17:19:52.854210    6956 start.go:128] duration metric: took 2.349805s to createHost
	I0721 17:19:52.854237    6956 start.go:83] releasing machines lock for "no-preload-980000", held for 2.349879583s
	W0721 17:19:52.854294    6956 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:52.864634    6956 out.go:177] * Deleting "no-preload-980000" in qemu2 ...
	W0721 17:19:52.888437    6956 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:19:52.888458    6956 start.go:729] Will try again in 5 seconds ...
	I0721 17:19:53.383073    6956 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0721 17:19:53.390075    6956 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0721 17:19:53.463339    6956 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0721 17:19:53.522790    6956 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0721 17:19:53.522820    6956 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 3.018770708s
	I0721 17:19:53.522834    6956 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0721 17:19:54.487476    6956 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0721 17:19:54.487518    6956 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.983464792s
	I0721 17:19:54.487532    6956 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0721 17:19:55.568798    6956 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0721 17:19:55.568824    6956 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 5.064775208s
	I0721 17:19:55.568837    6956 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0721 17:19:55.791290    6956 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0721 17:19:55.791342    6956 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 5.287446209s
	I0721 17:19:55.791361    6956 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0721 17:19:56.015718    6956 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0721 17:19:56.015749    6956 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 5.511648917s
	I0721 17:19:56.015765    6956 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0721 17:19:57.438136    6956 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0721 17:19:57.438169    6956 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 6.934297791s
	I0721 17:19:57.438216    6956 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0721 17:19:57.888425    6956 start.go:360] acquireMachinesLock for no-preload-980000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:19:57.888574    6956 start.go:364] duration metric: took 117.708µs to acquireMachinesLock for "no-preload-980000"
	I0721 17:19:57.888605    6956 start.go:93] Provisioning new machine with config: &{Name:no-preload-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:19:57.888666    6956 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:19:57.898008    6956 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:19:57.915231    6956 start.go:159] libmachine.API.Create for "no-preload-980000" (driver="qemu2")
	I0721 17:19:57.915259    6956 client.go:168] LocalClient.Create starting
	I0721 17:19:57.915326    6956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:19:57.915367    6956 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:57.915380    6956 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:57.915417    6956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:19:57.915441    6956 main.go:141] libmachine: Decoding PEM data...
	I0721 17:19:57.915449    6956 main.go:141] libmachine: Parsing certificate...
	I0721 17:19:57.915745    6956 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:19:58.073072    6956 main.go:141] libmachine: Creating SSH key...
	I0721 17:19:58.121730    6956 main.go:141] libmachine: Creating Disk image...
	I0721 17:19:58.121738    6956 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:19:58.121911    6956 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2
	I0721 17:19:58.131332    6956 main.go:141] libmachine: STDOUT: 
	I0721 17:19:58.131365    6956 main.go:141] libmachine: STDERR: 
	I0721 17:19:58.131420    6956 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2 +20000M
	I0721 17:19:58.139388    6956 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:19:58.139405    6956 main.go:141] libmachine: STDERR: 
	I0721 17:19:58.139415    6956 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2
	I0721 17:19:58.139420    6956 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:19:58.139435    6956 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:19:58.139468    6956 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:34:dd:3f:97:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2
	I0721 17:19:58.141187    6956 main.go:141] libmachine: STDOUT: 
	I0721 17:19:58.141206    6956 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:19:58.141221    6956 client.go:171] duration metric: took 225.965917ms to LocalClient.Create
	I0721 17:19:58.880614    6956 cache.go:157] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0721 17:19:58.880671    6956 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 8.376753625s
	I0721 17:19:58.880693    6956 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0721 17:19:58.880764    6956 cache.go:87] Successfully saved all images to host disk.
	I0721 17:20:00.143290    6956 start.go:128] duration metric: took 2.25467125s to createHost
	I0721 17:20:00.143320    6956 start.go:83] releasing machines lock for "no-preload-980000", held for 2.254801167s
	W0721 17:20:00.143494    6956 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-980000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-980000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:00.156254    6956 out.go:177] 
	W0721 17:20:00.160472    6956 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:20:00.160489    6956 out.go:239] * 
	* 
	W0721 17:20:00.162260    6956 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:20:00.172344    6956 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-980000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000: exit status 7 (52.837791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.85s)

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-980000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-980000 create -f testdata/busybox.yaml: exit status 1 (30.098917ms)

** stderr ** 
	error: context "no-preload-980000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-980000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000: exit status 7 (30.04875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-980000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000: exit status 7 (28.796959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-980000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-980000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-980000 describe deploy/metrics-server -n kube-system: exit status 1 (26.443917ms)

** stderr ** 
	error: context "no-preload-980000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-980000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000: exit status 7 (29.117958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-540000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-540000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.801332834s)

-- stdout --
	* [embed-certs-540000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-540000" primary control-plane node in "embed-certs-540000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-540000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:20:02.865509    7027 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:20:02.865654    7027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:02.865657    7027 out.go:304] Setting ErrFile to fd 2...
	I0721 17:20:02.865659    7027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:02.865783    7027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:20:02.866748    7027 out.go:298] Setting JSON to false
	I0721 17:20:02.883168    7027 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4765,"bootTime":1721602837,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:20:02.883295    7027 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:20:02.889675    7027 out.go:177] * [embed-certs-540000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:20:02.897523    7027 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:20:02.897572    7027 notify.go:220] Checking for updates...
	I0721 17:20:02.905482    7027 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:20:02.908504    7027 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:20:02.911526    7027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:20:02.914469    7027 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:20:02.917511    7027 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:20:02.920914    7027 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:20:02.920982    7027 config.go:182] Loaded profile config "no-preload-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0721 17:20:02.921038    7027 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:20:02.925388    7027 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:20:02.932487    7027 start.go:297] selected driver: qemu2
	I0721 17:20:02.932494    7027 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:20:02.932502    7027 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:20:02.935039    7027 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:20:02.938513    7027 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:20:02.941601    7027 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:20:02.941641    7027 cni.go:84] Creating CNI manager for ""
	I0721 17:20:02.941649    7027 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:20:02.941653    7027 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:20:02.941691    7027 start.go:340] cluster config:
	{Name:embed-certs-540000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:20:02.945614    7027 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:02.947560    7027 out.go:177] * Starting "embed-certs-540000" primary control-plane node in "embed-certs-540000" cluster
	I0721 17:20:02.955477    7027 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:20:02.955495    7027 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:20:02.955508    7027 cache.go:56] Caching tarball of preloaded images
	I0721 17:20:02.955602    7027 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:20:02.955608    7027 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:20:02.955675    7027 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/embed-certs-540000/config.json ...
	I0721 17:20:02.955691    7027 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/embed-certs-540000/config.json: {Name:mk612e9a4d6e94b7ac78f70fea74dbddbb8c7b1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:20:02.956034    7027 start.go:360] acquireMachinesLock for embed-certs-540000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:02.956083    7027 start.go:364] duration metric: took 29.791µs to acquireMachinesLock for "embed-certs-540000"
	I0721 17:20:02.956094    7027 start.go:93] Provisioning new machine with config: &{Name:embed-certs-540000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:20:02.956133    7027 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:20:02.963514    7027 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:20:02.982051    7027 start.go:159] libmachine.API.Create for "embed-certs-540000" (driver="qemu2")
	I0721 17:20:02.982077    7027 client.go:168] LocalClient.Create starting
	I0721 17:20:02.982147    7027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:20:02.982178    7027 main.go:141] libmachine: Decoding PEM data...
	I0721 17:20:02.982186    7027 main.go:141] libmachine: Parsing certificate...
	I0721 17:20:02.982222    7027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:20:02.982246    7027 main.go:141] libmachine: Decoding PEM data...
	I0721 17:20:02.982253    7027 main.go:141] libmachine: Parsing certificate...
	I0721 17:20:02.982716    7027 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:20:03.123640    7027 main.go:141] libmachine: Creating SSH key...
	I0721 17:20:03.228948    7027 main.go:141] libmachine: Creating Disk image...
	I0721 17:20:03.228953    7027 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:20:03.229110    7027 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2
	I0721 17:20:03.238420    7027 main.go:141] libmachine: STDOUT: 
	I0721 17:20:03.238436    7027 main.go:141] libmachine: STDERR: 
	I0721 17:20:03.238483    7027 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2 +20000M
	I0721 17:20:03.246382    7027 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:20:03.246402    7027 main.go:141] libmachine: STDERR: 
	I0721 17:20:03.246412    7027 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2
	I0721 17:20:03.246417    7027 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:20:03.246425    7027 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:03.246451    7027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:e5:b4:48:ba:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2
	I0721 17:20:03.248149    7027 main.go:141] libmachine: STDOUT: 
	I0721 17:20:03.248165    7027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:03.248182    7027 client.go:171] duration metric: took 266.108292ms to LocalClient.Create
	I0721 17:20:05.250327    7027 start.go:128] duration metric: took 2.294231834s to createHost
	I0721 17:20:05.250398    7027 start.go:83] releasing machines lock for "embed-certs-540000", held for 2.294367625s
	W0721 17:20:05.250481    7027 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:05.264974    7027 out.go:177] * Deleting "embed-certs-540000" in qemu2 ...
	W0721 17:20:05.293775    7027 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:05.293802    7027 start.go:729] Will try again in 5 seconds ...
	I0721 17:20:10.295882    7027 start.go:360] acquireMachinesLock for embed-certs-540000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:10.306234    7027 start.go:364] duration metric: took 10.272042ms to acquireMachinesLock for "embed-certs-540000"
	I0721 17:20:10.306306    7027 start.go:93] Provisioning new machine with config: &{Name:embed-certs-540000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:20:10.306587    7027 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:20:10.314174    7027 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:20:10.361454    7027 start.go:159] libmachine.API.Create for "embed-certs-540000" (driver="qemu2")
	I0721 17:20:10.361521    7027 client.go:168] LocalClient.Create starting
	I0721 17:20:10.361634    7027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:20:10.361701    7027 main.go:141] libmachine: Decoding PEM data...
	I0721 17:20:10.361716    7027 main.go:141] libmachine: Parsing certificate...
	I0721 17:20:10.361776    7027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:20:10.361821    7027 main.go:141] libmachine: Decoding PEM data...
	I0721 17:20:10.361837    7027 main.go:141] libmachine: Parsing certificate...
	I0721 17:20:10.362321    7027 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:20:10.525623    7027 main.go:141] libmachine: Creating SSH key...
	I0721 17:20:10.575623    7027 main.go:141] libmachine: Creating Disk image...
	I0721 17:20:10.575634    7027 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:20:10.575808    7027 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2
	I0721 17:20:10.585503    7027 main.go:141] libmachine: STDOUT: 
	I0721 17:20:10.585530    7027 main.go:141] libmachine: STDERR: 
	I0721 17:20:10.585598    7027 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2 +20000M
	I0721 17:20:10.594678    7027 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:20:10.594698    7027 main.go:141] libmachine: STDERR: 
	I0721 17:20:10.594711    7027 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2
	I0721 17:20:10.594714    7027 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:20:10.594737    7027 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:10.594769    7027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:94:13:08:e7:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2
	I0721 17:20:10.596774    7027 main.go:141] libmachine: STDOUT: 
	I0721 17:20:10.596790    7027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:10.596805    7027 client.go:171] duration metric: took 235.284041ms to LocalClient.Create
	I0721 17:20:12.598964    7027 start.go:128] duration metric: took 2.292408125s to createHost
	I0721 17:20:12.599030    7027 start.go:83] releasing machines lock for "embed-certs-540000", held for 2.2928275s
	W0721 17:20:12.599347    7027 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-540000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-540000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:12.612886    7027 out.go:177] 
	W0721 17:20:12.616059    7027 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:20:12.616104    7027 out.go:239] * 
	* 
	W0721 17:20:12.618786    7027 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:20:12.625791    7027 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-540000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000: exit status 7 (51.006875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.85s)
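Note: every failure in this group traces back to the same driver error in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the QEMU VM is never started. A minimal Go sketch (not part of the test suite; only the socket path is taken from the log) that reproduces the "Connection refused" when no socket_vmnet daemon is listening:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the same unix socket the qemu2 driver hands to socket_vmnet_client.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// With no socket_vmnet daemon listening, this prints
			// "connect: connection refused", matching the driver output above.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is reachable")
	}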

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-980000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-980000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (6.052692s)

                                                
                                                
-- stdout --
	* [no-preload-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-980000" primary control-plane node in "no-preload-980000" cluster
	* Restarting existing qemu2 VM for "no-preload-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:20:04.320297    7045 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:20:04.320424    7045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:04.320427    7045 out.go:304] Setting ErrFile to fd 2...
	I0721 17:20:04.320430    7045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:04.320566    7045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:20:04.321607    7045 out.go:298] Setting JSON to false
	I0721 17:20:04.337621    7045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4767,"bootTime":1721602837,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:20:04.337702    7045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:20:04.341783    7045 out.go:177] * [no-preload-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:20:04.348732    7045 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:20:04.348790    7045 notify.go:220] Checking for updates...
	I0721 17:20:04.355669    7045 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:20:04.358729    7045 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:20:04.361729    7045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:20:04.364602    7045 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:20:04.367721    7045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:20:04.371093    7045 config.go:182] Loaded profile config "no-preload-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0721 17:20:04.371350    7045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:20:04.375601    7045 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 17:20:04.382732    7045 start.go:297] selected driver: qemu2
	I0721 17:20:04.382740    7045 start.go:901] validating driver "qemu2" against &{Name:no-preload-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:20:04.382810    7045 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:20:04.385181    7045 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:20:04.385205    7045 cni.go:84] Creating CNI manager for ""
	I0721 17:20:04.385211    7045 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:20:04.385242    7045 start.go:340] cluster config:
	{Name:no-preload-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:20:04.388798    7045 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:04.406717    7045 out.go:177] * Starting "no-preload-980000" primary control-plane node in "no-preload-980000" cluster
	I0721 17:20:04.410489    7045 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0721 17:20:04.410561    7045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/no-preload-980000/config.json ...
	I0721 17:20:04.410590    7045 cache.go:107] acquiring lock: {Name:mk2324657e2398ee084d01833edfc8d0e662b64a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:04.410606    7045 cache.go:107] acquiring lock: {Name:mk23e1a5adc8052546ad5ee221d04394b7657d40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:04.410635    7045 cache.go:107] acquiring lock: {Name:mkbde0065602fbd4fbb10d69a8d2decc9d903ef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:04.410678    7045 cache.go:115] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0721 17:20:04.410678    7045 cache.go:115] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0721 17:20:04.410684    7045 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 78.708µs
	I0721 17:20:04.410695    7045 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0721 17:20:04.410686    7045 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 102.209µs
	I0721 17:20:04.410704    7045 cache.go:107] acquiring lock: {Name:mk9665d858bd0c1e7fbba239990d7a27712281ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:04.410710    7045 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0721 17:20:04.410717    7045 cache.go:107] acquiring lock: {Name:mk9bb7ee73fd6505463742802170aef00a12b50c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:04.410731    7045 cache.go:115] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0721 17:20:04.410746    7045 cache.go:115] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0721 17:20:04.410741    7045 cache.go:107] acquiring lock: {Name:mk9ee8f02d7104c1dcfb77f567f7c814141afa3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:04.410752    7045 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 48.833µs
	I0721 17:20:04.410756    7045 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0721 17:20:04.410760    7045 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 155.583µs
	I0721 17:20:04.410767    7045 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0721 17:20:04.410773    7045 cache.go:115] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0721 17:20:04.410777    7045 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 60.833µs
	I0721 17:20:04.410782    7045 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0721 17:20:04.410797    7045 cache.go:115] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0721 17:20:04.410802    7045 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 71.083µs
	I0721 17:20:04.410807    7045 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0721 17:20:04.410807    7045 cache.go:107] acquiring lock: {Name:mka0846d702e11f6e91afa564b60c38b2ec2c668 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:04.410814    7045 cache.go:107] acquiring lock: {Name:mkecbbc39f47cb3d63d2278731a93d249c8d9718 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:04.410871    7045 cache.go:115] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0721 17:20:04.410872    7045 cache.go:115] /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0721 17:20:04.410875    7045 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 134.666µs
	I0721 17:20:04.410885    7045 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0721 17:20:04.410879    7045 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 119.709µs
	I0721 17:20:04.410893    7045 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0721 17:20:04.410897    7045 cache.go:87] Successfully saved all images to host disk.
	I0721 17:20:04.411013    7045 start.go:360] acquireMachinesLock for no-preload-980000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:05.250544    7045 start.go:364] duration metric: took 839.533625ms to acquireMachinesLock for "no-preload-980000"
	I0721 17:20:05.250715    7045 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:20:05.250746    7045 fix.go:54] fixHost starting: 
	I0721 17:20:05.251427    7045 fix.go:112] recreateIfNeeded on no-preload-980000: state=Stopped err=<nil>
	W0721 17:20:05.251475    7045 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:20:05.257203    7045 out.go:177] * Restarting existing qemu2 VM for "no-preload-980000" ...
	I0721 17:20:05.269016    7045 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:05.269224    7045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:34:dd:3f:97:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2
	I0721 17:20:05.279997    7045 main.go:141] libmachine: STDOUT: 
	I0721 17:20:05.280099    7045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:05.280218    7045 fix.go:56] duration metric: took 29.464583ms for fixHost
	I0721 17:20:05.280238    7045 start.go:83] releasing machines lock for "no-preload-980000", held for 29.65525ms
	W0721 17:20:05.280498    7045 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:20:05.280674    7045 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:05.280694    7045 start.go:729] Will try again in 5 seconds ...
	I0721 17:20:10.282782    7045 start.go:360] acquireMachinesLock for no-preload-980000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:10.283215    7045 start.go:364] duration metric: took 357.792µs to acquireMachinesLock for "no-preload-980000"
	I0721 17:20:10.283334    7045 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:20:10.283356    7045 fix.go:54] fixHost starting: 
	I0721 17:20:10.284098    7045 fix.go:112] recreateIfNeeded on no-preload-980000: state=Stopped err=<nil>
	W0721 17:20:10.284124    7045 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:20:10.291282    7045 out.go:177] * Restarting existing qemu2 VM for "no-preload-980000" ...
	I0721 17:20:10.296252    7045 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:10.296419    7045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:34:dd:3f:97:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/no-preload-980000/disk.qcow2
	I0721 17:20:10.305859    7045 main.go:141] libmachine: STDOUT: 
	I0721 17:20:10.305977    7045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:10.306104    7045 fix.go:56] duration metric: took 22.750833ms for fixHost
	I0721 17:20:10.306127    7045 start.go:83] releasing machines lock for "no-preload-980000", held for 22.890291ms
	W0721 17:20:10.306388    7045 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-980000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-980000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:10.322242    7045 out.go:177] 
	W0721 17:20:10.326283    7045 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:20:10.326315    7045 out.go:239] * 
	* 
	W0721 17:20:10.327787    7045 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:20:10.337183    7045 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-980000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000: exit status 7 (56.451167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.11s)
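The SecondStart path differs only in that it reuses the existing machine ("Restarting existing qemu2 VM" via fixHost) rather than creating one, but it stops at the same point. The log shows the two-attempt behaviour: start, wait five seconds, retry once, then exit with GUEST_PROVISION. A rough Go sketch of that flow (an illustration of the pattern in the log above, not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the qemu2 driver start that fails in this run.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}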

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-980000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000: exit status 7 (35.962375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-980000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-980000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-980000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.368291ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-980000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-980000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000: exit status 7 (32.781375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-980000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000: exit status 7 (29.007166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-980000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-980000 --alsologtostderr -v=1: exit status 83 (41.462875ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-980000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-980000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:20:10.607147    7066 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:20:10.607296    7066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:10.607299    7066 out.go:304] Setting ErrFile to fd 2...
	I0721 17:20:10.607302    7066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:10.607432    7066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:20:10.607649    7066 out.go:298] Setting JSON to false
	I0721 17:20:10.607656    7066 mustload.go:65] Loading cluster: no-preload-980000
	I0721 17:20:10.607829    7066 config.go:182] Loaded profile config "no-preload-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0721 17:20:10.612262    7066 out.go:177] * The control-plane node no-preload-980000 host is not running: state=Stopped
	I0721 17:20:10.616560    7066 out.go:177]   To start a cluster, run: "minikube start -p no-preload-980000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-980000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000: exit status 7 (29.26225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-980000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000: exit status 7 (28.971583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-170000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-170000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (11.234757875s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-170000" primary control-plane node in "default-k8s-diff-port-170000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-170000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:20:11.023555    7092 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:20:11.023686    7092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:11.023690    7092 out.go:304] Setting ErrFile to fd 2...
	I0721 17:20:11.023692    7092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:11.023820    7092 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:20:11.024958    7092 out.go:298] Setting JSON to false
	I0721 17:20:11.040840    7092 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4774,"bootTime":1721602837,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:20:11.040900    7092 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:20:11.046184    7092 out.go:177] * [default-k8s-diff-port-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:20:11.053228    7092 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:20:11.053270    7092 notify.go:220] Checking for updates...
	I0721 17:20:11.059218    7092 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:20:11.062272    7092 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:20:11.065265    7092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:20:11.068229    7092 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:20:11.071231    7092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:20:11.074558    7092 config.go:182] Loaded profile config "embed-certs-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:20:11.074624    7092 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:20:11.074690    7092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:20:11.079159    7092 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:20:11.086126    7092 start.go:297] selected driver: qemu2
	I0721 17:20:11.086135    7092 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:20:11.086144    7092 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:20:11.088538    7092 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:20:11.091264    7092 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:20:11.094360    7092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:20:11.094401    7092 cni.go:84] Creating CNI manager for ""
	I0721 17:20:11.094410    7092 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:20:11.094414    7092 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:20:11.094449    7092 start.go:340] cluster config:
	{Name:default-k8s-diff-port-170000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:20:11.098168    7092 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:11.106207    7092 out.go:177] * Starting "default-k8s-diff-port-170000" primary control-plane node in "default-k8s-diff-port-170000" cluster
	I0721 17:20:11.110139    7092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:20:11.110153    7092 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:20:11.110164    7092 cache.go:56] Caching tarball of preloaded images
	I0721 17:20:11.110230    7092 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:20:11.110235    7092 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:20:11.110285    7092 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/default-k8s-diff-port-170000/config.json ...
	I0721 17:20:11.110298    7092 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/default-k8s-diff-port-170000/config.json: {Name:mk9e58eecfc2498ad87490f3e4f8b453191489ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:20:11.110629    7092 start.go:360] acquireMachinesLock for default-k8s-diff-port-170000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:12.599178    7092 start.go:364] duration metric: took 1.488557542s to acquireMachinesLock for "default-k8s-diff-port-170000"
	I0721 17:20:12.599339    7092 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:20:12.599556    7092 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:20:12.608811    7092 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:20:12.660133    7092 start.go:159] libmachine.API.Create for "default-k8s-diff-port-170000" (driver="qemu2")
	I0721 17:20:12.660199    7092 client.go:168] LocalClient.Create starting
	I0721 17:20:12.660312    7092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:20:12.660376    7092 main.go:141] libmachine: Decoding PEM data...
	I0721 17:20:12.660394    7092 main.go:141] libmachine: Parsing certificate...
	I0721 17:20:12.660466    7092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:20:12.660511    7092 main.go:141] libmachine: Decoding PEM data...
	I0721 17:20:12.660527    7092 main.go:141] libmachine: Parsing certificate...
	I0721 17:20:12.661155    7092 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:20:12.810505    7092 main.go:141] libmachine: Creating SSH key...
	I0721 17:20:12.844503    7092 main.go:141] libmachine: Creating Disk image...
	I0721 17:20:12.844510    7092 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:20:12.844697    7092 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2
	I0721 17:20:12.854180    7092 main.go:141] libmachine: STDOUT: 
	I0721 17:20:12.854202    7092 main.go:141] libmachine: STDERR: 
	I0721 17:20:12.854258    7092 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2 +20000M
	I0721 17:20:12.862842    7092 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:20:12.862862    7092 main.go:141] libmachine: STDERR: 
	I0721 17:20:12.862893    7092 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2
	I0721 17:20:12.862899    7092 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:20:12.862911    7092 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:12.862937    7092 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:92:94:29:6b:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2
	I0721 17:20:12.864918    7092 main.go:141] libmachine: STDOUT: 
	I0721 17:20:12.864946    7092 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:12.864963    7092 client.go:171] duration metric: took 204.764ms to LocalClient.Create
	I0721 17:20:14.867098    7092 start.go:128] duration metric: took 2.267570708s to createHost
	I0721 17:20:14.867161    7092 start.go:83] releasing machines lock for "default-k8s-diff-port-170000", held for 2.268002209s
	W0721 17:20:14.867210    7092 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:14.877467    7092 out.go:177] * Deleting "default-k8s-diff-port-170000" in qemu2 ...
	W0721 17:20:14.903229    7092 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:14.903282    7092 start.go:729] Will try again in 5 seconds ...
	I0721 17:20:19.905338    7092 start.go:360] acquireMachinesLock for default-k8s-diff-port-170000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:19.905716    7092 start.go:364] duration metric: took 303.25µs to acquireMachinesLock for "default-k8s-diff-port-170000"
	I0721 17:20:19.905838    7092 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:20:19.906116    7092 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:20:19.914673    7092 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:20:19.961522    7092 start.go:159] libmachine.API.Create for "default-k8s-diff-port-170000" (driver="qemu2")
	I0721 17:20:19.961572    7092 client.go:168] LocalClient.Create starting
	I0721 17:20:19.961687    7092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:20:19.961753    7092 main.go:141] libmachine: Decoding PEM data...
	I0721 17:20:19.961771    7092 main.go:141] libmachine: Parsing certificate...
	I0721 17:20:19.961834    7092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:20:19.961881    7092 main.go:141] libmachine: Decoding PEM data...
	I0721 17:20:19.961895    7092 main.go:141] libmachine: Parsing certificate...
	I0721 17:20:19.962495    7092 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:20:20.112815    7092 main.go:141] libmachine: Creating SSH key...
	I0721 17:20:20.154838    7092 main.go:141] libmachine: Creating Disk image...
	I0721 17:20:20.154843    7092 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:20:20.155016    7092 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2
	I0721 17:20:20.164176    7092 main.go:141] libmachine: STDOUT: 
	I0721 17:20:20.164190    7092 main.go:141] libmachine: STDERR: 
	I0721 17:20:20.164232    7092 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2 +20000M
	I0721 17:20:20.172018    7092 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:20:20.172031    7092 main.go:141] libmachine: STDERR: 
	I0721 17:20:20.172042    7092 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2
	I0721 17:20:20.172051    7092 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:20:20.172063    7092 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:20.172090    7092 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:22:eb:58:fc:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2
	I0721 17:20:20.173658    7092 main.go:141] libmachine: STDOUT: 
	I0721 17:20:20.173674    7092 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:20.173688    7092 client.go:171] duration metric: took 212.116167ms to LocalClient.Create
	I0721 17:20:22.175828    7092 start.go:128] duration metric: took 2.269747334s to createHost
	I0721 17:20:22.175889    7092 start.go:83] releasing machines lock for "default-k8s-diff-port-170000", held for 2.27020975s
	W0721 17:20:22.176204    7092 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-170000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-170000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:22.188669    7092 out.go:177] 
	W0721 17:20:22.191810    7092 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:20:22.191834    7092 out.go:239] * 
	* 
	W0721 17:20:22.194326    7092 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:20:22.206701    7092 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-170000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000: exit status 7 (67.15925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.30s)
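Every attempt in the block above dies at the same step: socket_vmnet_client cannot connect to the Unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never created and the later tests in this group inherit a cluster that does not exist. A quick way to check that dependency on the CI host, independent of minikube, is sketched below; it assumes socket_vmnet is installed under /opt/socket_vmnet (as the log shows), and the trailing daemon invocation is only an example of how the daemon is typically started, not something taken from this run.

	# Is anything serving the socket that the qemu2 driver expects?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Reproduce just the failing connect: socket_vmnet_client connects to the
	# socket and then execs the given command, so a no-op command is enough.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

	# If nothing is listening, the daemon has to be (re)started as root, e.g.:
	#   sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet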

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-540000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-540000 create -f testdata/busybox.yaml: exit status 1 (30.88375ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-540000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-540000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000: exit status 7 (33.873041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-540000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000: exit status 7 (32.850583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
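This failure, and the addon, image, and pause failures that follow for the same profiles, are downstream of the failed starts: because "minikube start" exited before provisioning, the run's kubeconfig never gained an "embed-certs-540000" (or "default-k8s-diff-port-170000") context, so every kubectl --context call fails immediately. That can be confirmed against the kubeconfig this run uses (path as reported in the start output; the commands below are an illustrative check, not part of the suite):

	KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig kubectl config get-contexts
	KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig kubectl --context embed-certs-540000 get nodes
	# second command fails with: error: context "embed-certs-540000" does not exist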

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-540000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-540000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-540000 describe deploy/metrics-server -n kube-system: exit status 1 (27.470166ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-540000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-540000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000: exit status 7 (28.93325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-540000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
E0721 17:20:18.983254    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-540000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.319710208s)

                                                
                                                
-- stdout --
	* [embed-certs-540000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-540000" primary control-plane node in "embed-certs-540000" cluster
	* Restarting existing qemu2 VM for "embed-certs-540000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-540000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:20:16.949510    7140 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:20:16.949638    7140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:16.949642    7140 out.go:304] Setting ErrFile to fd 2...
	I0721 17:20:16.949644    7140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:16.949774    7140 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:20:16.950772    7140 out.go:298] Setting JSON to false
	I0721 17:20:16.966669    7140 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4779,"bootTime":1721602837,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:20:16.966737    7140 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:20:16.972123    7140 out.go:177] * [embed-certs-540000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:20:16.978988    7140 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:20:16.979070    7140 notify.go:220] Checking for updates...
	I0721 17:20:16.985912    7140 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:20:16.988972    7140 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:20:16.992032    7140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:20:16.995005    7140 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:20:16.997969    7140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:20:17.001346    7140 config.go:182] Loaded profile config "embed-certs-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:20:17.001616    7140 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:20:17.004955    7140 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 17:20:17.012042    7140 start.go:297] selected driver: qemu2
	I0721 17:20:17.012049    7140 start.go:901] validating driver "qemu2" against &{Name:embed-certs-540000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:embed-certs-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:20:17.012118    7140 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:20:17.014396    7140 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:20:17.014419    7140 cni.go:84] Creating CNI manager for ""
	I0721 17:20:17.014426    7140 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:20:17.014457    7140 start.go:340] cluster config:
	{Name:embed-certs-540000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-540000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:20:17.018049    7140 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:17.025949    7140 out.go:177] * Starting "embed-certs-540000" primary control-plane node in "embed-certs-540000" cluster
	I0721 17:20:17.029938    7140 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:20:17.029958    7140 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:20:17.029978    7140 cache.go:56] Caching tarball of preloaded images
	I0721 17:20:17.030040    7140 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:20:17.030045    7140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:20:17.030108    7140 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/embed-certs-540000/config.json ...
	I0721 17:20:17.030542    7140 start.go:360] acquireMachinesLock for embed-certs-540000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:17.030581    7140 start.go:364] duration metric: took 32.875µs to acquireMachinesLock for "embed-certs-540000"
	I0721 17:20:17.030590    7140 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:20:17.030596    7140 fix.go:54] fixHost starting: 
	I0721 17:20:17.030723    7140 fix.go:112] recreateIfNeeded on embed-certs-540000: state=Stopped err=<nil>
	W0721 17:20:17.030734    7140 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:20:17.038999    7140 out.go:177] * Restarting existing qemu2 VM for "embed-certs-540000" ...
	I0721 17:20:17.042984    7140 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:17.043021    7140 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:94:13:08:e7:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2
	I0721 17:20:17.045067    7140 main.go:141] libmachine: STDOUT: 
	I0721 17:20:17.045086    7140 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:17.045116    7140 fix.go:56] duration metric: took 14.519916ms for fixHost
	I0721 17:20:17.045120    7140 start.go:83] releasing machines lock for "embed-certs-540000", held for 14.535375ms
	W0721 17:20:17.045127    7140 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:20:17.045155    7140 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:17.045159    7140 start.go:729] Will try again in 5 seconds ...
	I0721 17:20:22.047232    7140 start.go:360] acquireMachinesLock for embed-certs-540000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:22.176010    7140 start.go:364] duration metric: took 128.66325ms to acquireMachinesLock for "embed-certs-540000"
	I0721 17:20:22.176145    7140 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:20:22.176164    7140 fix.go:54] fixHost starting: 
	I0721 17:20:22.176818    7140 fix.go:112] recreateIfNeeded on embed-certs-540000: state=Stopped err=<nil>
	W0721 17:20:22.176853    7140 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:20:22.188663    7140 out.go:177] * Restarting existing qemu2 VM for "embed-certs-540000" ...
	I0721 17:20:22.194712    7140 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:22.195194    7140 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:94:13:08:e7:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000/disk.qcow2
	I0721 17:20:22.204113    7140 main.go:141] libmachine: STDOUT: 
	I0721 17:20:22.204164    7140 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:22.204222    7140 fix.go:56] duration metric: took 28.0615ms for fixHost
	I0721 17:20:22.204237    7140 start.go:83] releasing machines lock for "embed-certs-540000", held for 28.203583ms
	W0721 17:20:22.204476    7140 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-540000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-540000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:22.217688    7140 out.go:177] 
	W0721 17:20:22.221794    7140 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:20:22.221827    7140 out.go:239] * 
	* 
	W0721 17:20:22.224360    7140 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:20:22.232335    7140 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-540000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000: exit status 7 (57.592208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.38s)
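SecondStart takes the restart path (fixHost on the existing, stopped "embed-certs-540000" machine) rather than creating a new one, but it stops at exactly the same point: the socket_vmnet connection is refused before qemu is ever reached. One hypothetical way to separate the two layers is to boot the same VM with qemu's built-in user-mode networking instead of the socket_vmnet file descriptor; if this boots, qemu and hvf are healthy and only the vmnet socket is broken. The command below is adapted from the invocation in the log (QMP, pidfile, and daemonize options dropped, netdev swapped) and is not something the suite itself runs:

	M=/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/embed-certs-540000
	qemu-system-aarch64 -M virt,highmem=off -cpu host \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -accel hvf -m 2200 -smp 2 -boot d \
	  -cdrom "$M/boot2docker.iso" \
	  -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
	  "$M/disk.qcow2"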

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-170000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-170000 create -f testdata/busybox.yaml: exit status 1 (32.5705ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-170000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-170000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000: exit status 7 (30.085792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-170000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000: exit status 7 (32.98175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-540000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000: exit status 7 (34.27775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-540000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-540000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-540000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.713167ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-540000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-540000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000: exit status 7 (30.957208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-170000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-170000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-170000 describe deploy/metrics-server -n kube-system: exit status 1 (28.428959ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-170000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-170000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000: exit status 7 (35.61ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-540000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000: exit status 7 (28.908958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-540000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-540000 --alsologtostderr -v=1: exit status 83 (49.333791ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-540000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-540000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:20:22.512501    7173 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:20:22.512671    7173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:22.512675    7173 out.go:304] Setting ErrFile to fd 2...
	I0721 17:20:22.512677    7173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:22.512815    7173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:20:22.513032    7173 out.go:298] Setting JSON to false
	I0721 17:20:22.513039    7173 mustload.go:65] Loading cluster: embed-certs-540000
	I0721 17:20:22.513226    7173 config.go:182] Loaded profile config "embed-certs-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:20:22.516218    7173 out.go:177] * The control-plane node embed-certs-540000 host is not running: state=Stopped
	I0721 17:20:22.526237    7173 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-540000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-540000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000: exit status 7 (36.311209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-540000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000: exit status 7 (28.5735ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-208000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-208000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.748635083s)

                                                
                                                
-- stdout --
	* [newest-cni-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-208000" primary control-plane node in "newest-cni-208000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-208000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:20:22.834365    7196 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:20:22.834533    7196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:22.834536    7196 out.go:304] Setting ErrFile to fd 2...
	I0721 17:20:22.834538    7196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:22.834669    7196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:20:22.835720    7196 out.go:298] Setting JSON to false
	I0721 17:20:22.851712    7196 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4785,"bootTime":1721602837,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:20:22.851776    7196 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:20:22.857234    7196 out.go:177] * [newest-cni-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:20:22.864266    7196 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:20:22.864308    7196 notify.go:220] Checking for updates...
	I0721 17:20:22.872215    7196 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:20:22.875232    7196 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:20:22.878245    7196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:20:22.881143    7196 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:20:22.884213    7196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:20:22.892476    7196 config.go:182] Loaded profile config "default-k8s-diff-port-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:20:22.892541    7196 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:20:22.892600    7196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:20:22.897094    7196 out.go:177] * Using the qemu2 driver based on user configuration
	I0721 17:20:22.904205    7196 start.go:297] selected driver: qemu2
	I0721 17:20:22.904212    7196 start.go:901] validating driver "qemu2" against <nil>
	I0721 17:20:22.904218    7196 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:20:22.906568    7196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0721 17:20:22.906608    7196 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0721 17:20:22.915214    7196 out.go:177] * Automatically selected the socket_vmnet network
	I0721 17:20:22.918365    7196 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0721 17:20:22.918388    7196 cni.go:84] Creating CNI manager for ""
	I0721 17:20:22.918399    7196 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:20:22.918407    7196 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:20:22.918445    7196 start.go:340] cluster config:
	{Name:newest-cni-208000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:20:22.922415    7196 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:22.930225    7196 out.go:177] * Starting "newest-cni-208000" primary control-plane node in "newest-cni-208000" cluster
	I0721 17:20:22.934014    7196 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0721 17:20:22.934031    7196 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0721 17:20:22.934043    7196 cache.go:56] Caching tarball of preloaded images
	I0721 17:20:22.934103    7196 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:20:22.934109    7196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0721 17:20:22.934190    7196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/newest-cni-208000/config.json ...
	I0721 17:20:22.934203    7196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/newest-cni-208000/config.json: {Name:mka87e28925186845dce2be6bfc342c31821b33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 17:20:22.934533    7196 start.go:360] acquireMachinesLock for newest-cni-208000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:22.934570    7196 start.go:364] duration metric: took 30.541µs to acquireMachinesLock for "newest-cni-208000"
	I0721 17:20:22.934581    7196 start.go:93] Provisioning new machine with config: &{Name:newest-cni-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:20:22.934611    7196 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:20:22.939196    7196 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:20:22.957818    7196 start.go:159] libmachine.API.Create for "newest-cni-208000" (driver="qemu2")
	I0721 17:20:22.957843    7196 client.go:168] LocalClient.Create starting
	I0721 17:20:22.957910    7196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:20:22.957945    7196 main.go:141] libmachine: Decoding PEM data...
	I0721 17:20:22.957956    7196 main.go:141] libmachine: Parsing certificate...
	I0721 17:20:22.957994    7196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:20:22.958023    7196 main.go:141] libmachine: Decoding PEM data...
	I0721 17:20:22.958029    7196 main.go:141] libmachine: Parsing certificate...
	I0721 17:20:22.958512    7196 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:20:23.099450    7196 main.go:141] libmachine: Creating SSH key...
	I0721 17:20:23.158229    7196 main.go:141] libmachine: Creating Disk image...
	I0721 17:20:23.158237    7196 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:20:23.158405    7196 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2
	I0721 17:20:23.167501    7196 main.go:141] libmachine: STDOUT: 
	I0721 17:20:23.167520    7196 main.go:141] libmachine: STDERR: 
	I0721 17:20:23.167580    7196 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2 +20000M
	I0721 17:20:23.175442    7196 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:20:23.175464    7196 main.go:141] libmachine: STDERR: 
	I0721 17:20:23.175479    7196 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2
	I0721 17:20:23.175484    7196 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:20:23.175493    7196 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:23.175521    7196 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:1a:2e:4f:e0:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2
	I0721 17:20:23.177237    7196 main.go:141] libmachine: STDOUT: 
	I0721 17:20:23.177254    7196 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:23.177271    7196 client.go:171] duration metric: took 219.430625ms to LocalClient.Create
	I0721 17:20:25.179526    7196 start.go:128] duration metric: took 2.244952583s to createHost
	I0721 17:20:25.179605    7196 start.go:83] releasing machines lock for "newest-cni-208000", held for 2.24508675s
	W0721 17:20:25.179667    7196 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:25.194021    7196 out.go:177] * Deleting "newest-cni-208000" in qemu2 ...
	W0721 17:20:25.222399    7196 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:25.222426    7196 start.go:729] Will try again in 5 seconds ...
	I0721 17:20:30.222907    7196 start.go:360] acquireMachinesLock for newest-cni-208000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:30.233262    7196 start.go:364] duration metric: took 10.277042ms to acquireMachinesLock for "newest-cni-208000"
	I0721 17:20:30.233328    7196 start.go:93] Provisioning new machine with config: &{Name:newest-cni-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:20:30.233605    7196 start.go:125] createHost starting for "" (driver="qemu2")
	I0721 17:20:30.242366    7196 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 17:20:30.292581    7196 start.go:159] libmachine.API.Create for "newest-cni-208000" (driver="qemu2")
	I0721 17:20:30.292630    7196 client.go:168] LocalClient.Create starting
	I0721 17:20:30.292723    7196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/ca.pem
	I0721 17:20:30.292786    7196 main.go:141] libmachine: Decoding PEM data...
	I0721 17:20:30.292799    7196 main.go:141] libmachine: Parsing certificate...
	I0721 17:20:30.292858    7196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1409/.minikube/certs/cert.pem
	I0721 17:20:30.292911    7196 main.go:141] libmachine: Decoding PEM data...
	I0721 17:20:30.292922    7196 main.go:141] libmachine: Parsing certificate...
	I0721 17:20:30.293429    7196 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0721 17:20:30.443958    7196 main.go:141] libmachine: Creating SSH key...
	I0721 17:20:30.496876    7196 main.go:141] libmachine: Creating Disk image...
	I0721 17:20:30.496885    7196 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0721 17:20:30.497094    7196 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2
	I0721 17:20:30.506808    7196 main.go:141] libmachine: STDOUT: 
	I0721 17:20:30.506834    7196 main.go:141] libmachine: STDERR: 
	I0721 17:20:30.506895    7196 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2 +20000M
	I0721 17:20:30.515810    7196 main.go:141] libmachine: STDOUT: Image resized.
	
	I0721 17:20:30.515830    7196 main.go:141] libmachine: STDERR: 
	I0721 17:20:30.515841    7196 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2
	I0721 17:20:30.515844    7196 main.go:141] libmachine: Starting QEMU VM...
	I0721 17:20:30.515855    7196 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:30.515882    7196 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:a5:bb:d5:c7:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2
	I0721 17:20:30.517869    7196 main.go:141] libmachine: STDOUT: 
	I0721 17:20:30.517886    7196 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:30.517898    7196 client.go:171] duration metric: took 225.269459ms to LocalClient.Create
	I0721 17:20:32.520052    7196 start.go:128] duration metric: took 2.286461708s to createHost
	I0721 17:20:32.520137    7196 start.go:83] releasing machines lock for "newest-cni-208000", held for 2.286907875s
	W0721 17:20:32.520594    7196 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-208000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-208000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:32.529393    7196 out.go:177] 
	W0721 17:20:32.532408    7196 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:20:32.532451    7196 out.go:239] * 
	* 
	W0721 17:20:32.534802    7196 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:20:32.542353    7196 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-208000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-208000 -n newest-cni-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-208000 -n newest-cni-208000: exit status 7 (66.494083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.82s)
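Note: every start failure in this group reduces to the same root symptom visible in the stderr above: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, so the VM is never created. A quick spot-check of the daemon on the build host (hypothetical commands, not part of the test run; how socket_vmnet is started depends on the host setup, e.g. "sudo brew services start socket_vmnet" for a Homebrew install):

    # is the socket present, and is a socket_vmnet process serving it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet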

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-170000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-170000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.866654708s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-170000" primary control-plane node in "default-k8s-diff-port-170000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-170000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-170000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:20:24.434616    7216 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:20:24.434747    7216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:24.434750    7216 out.go:304] Setting ErrFile to fd 2...
	I0721 17:20:24.434753    7216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:24.434895    7216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:20:24.435957    7216 out.go:298] Setting JSON to false
	I0721 17:20:24.451967    7216 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4787,"bootTime":1721602837,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:20:24.452029    7216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:20:24.456714    7216 out.go:177] * [default-k8s-diff-port-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:20:24.463667    7216 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:20:24.463730    7216 notify.go:220] Checking for updates...
	I0721 17:20:24.469004    7216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:20:24.471683    7216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:20:24.474706    7216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:20:24.477766    7216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:20:24.480723    7216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:20:24.483955    7216 config.go:182] Loaded profile config "default-k8s-diff-port-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:20:24.484249    7216 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:20:24.488709    7216 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 17:20:24.495680    7216 start.go:297] selected driver: qemu2
	I0721 17:20:24.495688    7216 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:20:24.495760    7216 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:20:24.498095    7216 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:20:24.498134    7216 cni.go:84] Creating CNI manager for ""
	I0721 17:20:24.498142    7216 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:20:24.498166    7216 start.go:340] cluster config:
	{Name:default-k8s-diff-port-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-170000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:20:24.501613    7216 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:24.508638    7216 out.go:177] * Starting "default-k8s-diff-port-170000" primary control-plane node in "default-k8s-diff-port-170000" cluster
	I0721 17:20:24.512764    7216 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:20:24.512780    7216 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 17:20:24.512791    7216 cache.go:56] Caching tarball of preloaded images
	I0721 17:20:24.512859    7216 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:20:24.512865    7216 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:20:24.512930    7216 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/default-k8s-diff-port-170000/config.json ...
	I0721 17:20:24.513383    7216 start.go:360] acquireMachinesLock for default-k8s-diff-port-170000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:25.179771    7216 start.go:364] duration metric: took 666.335458ms to acquireMachinesLock for "default-k8s-diff-port-170000"
	I0721 17:20:25.179872    7216 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:20:25.179919    7216 fix.go:54] fixHost starting: 
	I0721 17:20:25.180576    7216 fix.go:112] recreateIfNeeded on default-k8s-diff-port-170000: state=Stopped err=<nil>
	W0721 17:20:25.180624    7216 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:20:25.186113    7216 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-170000" ...
	I0721 17:20:25.198143    7216 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:25.198329    7216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:22:eb:58:fc:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2
	I0721 17:20:25.208922    7216 main.go:141] libmachine: STDOUT: 
	I0721 17:20:25.209009    7216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:25.209138    7216 fix.go:56] duration metric: took 29.213416ms for fixHost
	I0721 17:20:25.209161    7216 start.go:83] releasing machines lock for "default-k8s-diff-port-170000", held for 29.359917ms
	W0721 17:20:25.209187    7216 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:20:25.209334    7216 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:25.209357    7216 start.go:729] Will try again in 5 seconds ...
	I0721 17:20:30.211412    7216 start.go:360] acquireMachinesLock for default-k8s-diff-port-170000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:30.211868    7216 start.go:364] duration metric: took 321.958µs to acquireMachinesLock for "default-k8s-diff-port-170000"
	I0721 17:20:30.211989    7216 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:20:30.212065    7216 fix.go:54] fixHost starting: 
	I0721 17:20:30.212824    7216 fix.go:112] recreateIfNeeded on default-k8s-diff-port-170000: state=Stopped err=<nil>
	W0721 17:20:30.212852    7216 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:20:30.218371    7216 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-170000" ...
	I0721 17:20:30.222345    7216 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:30.222623    7216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:22:eb:58:fc:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/default-k8s-diff-port-170000/disk.qcow2
	I0721 17:20:30.232981    7216 main.go:141] libmachine: STDOUT: 
	I0721 17:20:30.233053    7216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:30.233160    7216 fix.go:56] duration metric: took 21.152667ms for fixHost
	I0721 17:20:30.233183    7216 start.go:83] releasing machines lock for "default-k8s-diff-port-170000", held for 21.291709ms
	W0721 17:20:30.233371    7216 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-170000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-170000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:30.249373    7216 out.go:177] 
	W0721 17:20:30.253330    7216 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:20:30.253359    7216 out.go:239] * 
	* 
	W0721 17:20:30.255530    7216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:20:30.264236    7216 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-170000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000: exit status 7 (50.177208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-170000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000: exit status 7 (33.347041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-170000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-170000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-170000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (31.478958ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-170000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-170000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000: exit status 7 (34.076916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)
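Note: this failure is a follow-on from the earlier start failure rather than an addon problem: because "minikube start" never succeeded, no kubeconfig context was written for the profile, so every "kubectl --context" call exits 1 with "context ... does not exist". A hypothetical way to confirm from the build host, using the KUBECONFIG path shown in the logs:

    KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig kubectl config get-contexts
    # default-k8s-diff-port-170000 is absent until a successful start creates it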

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-170000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000: exit status 7 (30.21275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-170000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-170000 --alsologtostderr -v=1: exit status 83 (43.522042ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-170000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-170000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:20:30.528187    7237 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:20:30.528349    7237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:30.528353    7237 out.go:304] Setting ErrFile to fd 2...
	I0721 17:20:30.528355    7237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:30.528495    7237 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:20:30.528711    7237 out.go:298] Setting JSON to false
	I0721 17:20:30.528719    7237 mustload.go:65] Loading cluster: default-k8s-diff-port-170000
	I0721 17:20:30.528895    7237 config.go:182] Loaded profile config "default-k8s-diff-port-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:20:30.533336    7237 out.go:177] * The control-plane node default-k8s-diff-port-170000 host is not running: state=Stopped
	I0721 17:20:30.537215    7237 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-170000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-170000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000: exit status 7 (29.335333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-170000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000: exit status 7 (29.351917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-208000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-208000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.183651834s)

                                                
                                                
-- stdout --
	* [newest-cni-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-208000" primary control-plane node in "newest-cni-208000" cluster
	* Restarting existing qemu2 VM for "newest-cni-208000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-208000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0721 17:20:36.686477    7288 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:20:36.686610    7288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:36.686613    7288 out.go:304] Setting ErrFile to fd 2...
	I0721 17:20:36.686616    7288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:36.686738    7288 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:20:36.687714    7288 out.go:298] Setting JSON to false
	I0721 17:20:36.704258    7288 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4799,"bootTime":1721602837,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 17:20:36.704331    7288 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:20:36.709347    7288 out.go:177] * [newest-cni-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 17:20:36.717293    7288 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:20:36.717349    7288 notify.go:220] Checking for updates...
	I0721 17:20:36.725232    7288 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 17:20:36.728293    7288 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 17:20:36.731164    7288 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:20:36.734223    7288 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 17:20:36.737306    7288 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:20:36.739012    7288 config.go:182] Loaded profile config "newest-cni-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0721 17:20:36.739267    7288 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:20:36.743226    7288 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 17:20:36.750135    7288 start.go:297] selected driver: qemu2
	I0721 17:20:36.750141    7288 start.go:901] validating driver "qemu2" against &{Name:newest-cni-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expos
edPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:20:36.750207    7288 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:20:36.752700    7288 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0721 17:20:36.752741    7288 cni.go:84] Creating CNI manager for ""
	I0721 17:20:36.752749    7288 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:20:36.752776    7288 start.go:340] cluster config:
	{Name:newest-cni-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-208000 Namespace:default A
PIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:20:36.756420    7288 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:20:36.764281    7288 out.go:177] * Starting "newest-cni-208000" primary control-plane node in "newest-cni-208000" cluster
	I0721 17:20:36.768244    7288 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0721 17:20:36.768261    7288 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0721 17:20:36.768274    7288 cache.go:56] Caching tarball of preloaded images
	I0721 17:20:36.768332    7288 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0721 17:20:36.768341    7288 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0721 17:20:36.768415    7288 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/newest-cni-208000/config.json ...
	I0721 17:20:36.768856    7288 start.go:360] acquireMachinesLock for newest-cni-208000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:36.768895    7288 start.go:364] duration metric: took 32.708µs to acquireMachinesLock for "newest-cni-208000"
	I0721 17:20:36.768903    7288 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:20:36.768911    7288 fix.go:54] fixHost starting: 
	I0721 17:20:36.769036    7288 fix.go:112] recreateIfNeeded on newest-cni-208000: state=Stopped err=<nil>
	W0721 17:20:36.769044    7288 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:20:36.772298    7288 out.go:177] * Restarting existing qemu2 VM for "newest-cni-208000" ...
	I0721 17:20:36.780244    7288 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:36.780288    7288 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:a5:bb:d5:c7:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2
	I0721 17:20:36.782445    7288 main.go:141] libmachine: STDOUT: 
	I0721 17:20:36.782464    7288 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:36.782493    7288 fix.go:56] duration metric: took 13.582875ms for fixHost
	I0721 17:20:36.782499    7288 start.go:83] releasing machines lock for "newest-cni-208000", held for 13.599291ms
	W0721 17:20:36.782505    7288 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:20:36.782535    7288 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:36.782540    7288 start.go:729] Will try again in 5 seconds ...
	I0721 17:20:41.784630    7288 start.go:360] acquireMachinesLock for newest-cni-208000: {Name:mk80df4cd8036296a482caf90ad0ddb93dea84ad Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 17:20:41.785025    7288 start.go:364] duration metric: took 302.292µs to acquireMachinesLock for "newest-cni-208000"
	I0721 17:20:41.785192    7288 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:20:41.785211    7288 fix.go:54] fixHost starting: 
	I0721 17:20:41.785877    7288 fix.go:112] recreateIfNeeded on newest-cni-208000: state=Stopped err=<nil>
	W0721 17:20:41.785902    7288 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 17:20:41.794364    7288 out.go:177] * Restarting existing qemu2 VM for "newest-cni-208000" ...
	I0721 17:20:41.798430    7288 qemu.go:418] Using hvf for hardware acceleration
	I0721 17:20:41.798709    7288 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:a5:bb:d5:c7:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1409/.minikube/machines/newest-cni-208000/disk.qcow2
	I0721 17:20:41.807541    7288 main.go:141] libmachine: STDOUT: 
	I0721 17:20:41.807612    7288 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0721 17:20:41.807680    7288 fix.go:56] duration metric: took 22.469583ms for fixHost
	I0721 17:20:41.807697    7288 start.go:83] releasing machines lock for "newest-cni-208000", held for 22.648875ms
	W0721 17:20:41.807847    7288 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-208000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-208000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0721 17:20:41.815202    7288 out.go:177] 
	W0721 17:20:41.819310    7288 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0721 17:20:41.819349    7288 out.go:239] * 
	* 
	W0721 17:20:41.821852    7288 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:20:41.828260    7288 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-208000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-208000 -n newest-cni-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-208000 -n newest-cni-208000: exit status 7 (68.793834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
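The restart failure above, like the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` lines in this run, points at the host rather than the profile: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which needs a socket_vmnet daemon listening on /var/run/socket_vmnet. The following is a minimal standalone sketch, not part of the test suite and using only the socket path shown in the log, that could be run on the host to confirm whether that socket is accepting connections.

// check_socket_vmnet.go - hypothetical helper, not part of the minikube test suite.
// It dials the unix socket used by the qemu2 driver (path taken from the log above);
// a "connection refused" error here reproduces the driver failure seen in this test.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}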

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-208000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-208000 -n newest-cni-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-208000 -n newest-cni-208000: exit status 7 (29.953375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-208000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-208000 --alsologtostderr -v=1: exit status 83 (41.372583ms)

-- stdout --
	* The control-plane node newest-cni-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-208000"

-- /stdout --
** stderr ** 
	I0721 17:20:42.010714    7302 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:20:42.010868    7302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:42.010871    7302 out.go:304] Setting ErrFile to fd 2...
	I0721 17:20:42.010873    7302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:20:42.011000    7302 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 17:20:42.011232    7302 out.go:298] Setting JSON to false
	I0721 17:20:42.011238    7302 mustload.go:65] Loading cluster: newest-cni-208000
	I0721 17:20:42.011425    7302 config.go:182] Loaded profile config "newest-cni-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0721 17:20:42.015956    7302 out.go:177] * The control-plane node newest-cni-208000 host is not running: state=Stopped
	I0721 17:20:42.019722    7302 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-208000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-208000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-208000 -n newest-cni-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-208000 -n newest-cni-208000: exit status 7 (29.927042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-208000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-208000 -n newest-cni-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-208000 -n newest-cni-208000: exit status 7 (29.4685ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (162/282)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 7.94
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 6.38
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.1
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.28
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 230.14
38 TestAddons/parallel/Registry 17.95
39 TestAddons/parallel/Ingress 17.57
40 TestAddons/parallel/InspektorGadget 10.25
41 TestAddons/parallel/MetricsServer 5.25
44 TestAddons/parallel/CSI 34.56
45 TestAddons/parallel/Headlamp 13.43
46 TestAddons/parallel/CloudSpanner 5.17
47 TestAddons/parallel/LocalPath 52.83
48 TestAddons/parallel/NvidiaDevicePlugin 5.16
49 TestAddons/parallel/Yakd 5
50 TestAddons/parallel/Volcano 37.83
53 TestAddons/serial/GCPAuth/Namespaces 0.07
54 TestAddons/StoppedEnableDisable 12.39
62 TestHyperKitDriverInstallOrUpdate 10.48
65 TestErrorSpam/setup 34.62
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.25
68 TestErrorSpam/pause 0.65
69 TestErrorSpam/unpause 0.61
70 TestErrorSpam/stop 64.3
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 90.66
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 34.2
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.05
81 TestFunctional/serial/CacheCmd/cache/add_remote 9.88
82 TestFunctional/serial/CacheCmd/cache/add_local 1.08
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.03
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 2.36
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.93
90 TestFunctional/serial/ExtraConfig 38.22
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.65
93 TestFunctional/serial/LogsFileCmd 0.58
94 TestFunctional/serial/InvalidService 3.91
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 6.42
98 TestFunctional/parallel/DryRun 0.23
99 TestFunctional/parallel/InternationalLanguage 0.11
100 TestFunctional/parallel/StatusCmd 0.23
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 25.5
108 TestFunctional/parallel/SSHCmd 0.12
109 TestFunctional/parallel/CpCmd 0.43
111 TestFunctional/parallel/FileSync 0.06
112 TestFunctional/parallel/CertSync 0.37
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
120 TestFunctional/parallel/License 0.22
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.17
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.06
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
127 TestFunctional/parallel/ImageCommands/ImageBuild 5.85
128 TestFunctional/parallel/ImageCommands/Setup 1.75
129 TestFunctional/parallel/DockerEnv/bash 0.31
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
133 TestFunctional/parallel/ServiceCmd/DeployApp 13.08
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.44
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.34
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.13
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.13
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.16
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.1
146 TestFunctional/parallel/ServiceCmd/List 0.08
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.09
149 TestFunctional/parallel/ServiceCmd/Format 0.09
150 TestFunctional/parallel/ServiceCmd/URL 0.11
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.12
158 TestFunctional/parallel/ProfileCmd/profile_list 0.12
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
160 TestFunctional/parallel/MountCmd/any-port 9.04
161 TestFunctional/parallel/MountCmd/specific-port 1.62
162 TestFunctional/parallel/MountCmd/VerifyCleanup 0.65
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 368.58
170 TestMultiControlPlane/serial/DeployApp 9.84
171 TestMultiControlPlane/serial/PingHostFromPods 0.73
172 TestMultiControlPlane/serial/AddWorkerNode 89.01
173 TestMultiControlPlane/serial/NodeLabels 0.13
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.74
175 TestMultiControlPlane/serial/CopyFile 4.39
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 77.99
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 3.8
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.2
221 TestMainNoArgs 0.03
268 TestStoppedBinaryUpgrade/Setup 0.95
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
285 TestNoKubernetes/serial/ProfileList 31.23
286 TestNoKubernetes/serial/Stop 4.08
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
298 TestStoppedBinaryUpgrade/MinikubeLogs 0.76
303 TestStartStop/group/old-k8s-version/serial/Stop 3.91
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
314 TestStartStop/group/no-preload/serial/Stop 3.73
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
327 TestStartStop/group/embed-certs/serial/Stop 3.9
328 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.77
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
347 TestStartStop/group/newest-cni/serial/Stop 3.85
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-504000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-504000: exit status 85 (92.768625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-504000 | jenkins | v1.33.1 | 21 Jul 24 16:23 PDT |          |
	|         | -p download-only-504000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 16:23:49
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 16:23:49.499317    1915 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:23:49.499441    1915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:23:49.499444    1915 out.go:304] Setting ErrFile to fd 2...
	I0721 16:23:49.499447    1915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:23:49.499580    1915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	W0721 16:23:49.499659    1915 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19312-1409/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19312-1409/.minikube/config/config.json: no such file or directory
	I0721 16:23:49.500846    1915 out.go:298] Setting JSON to true
	I0721 16:23:49.518382    1915 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1392,"bootTime":1721602837,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 16:23:49.518449    1915 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 16:23:49.521786    1915 out.go:97] [download-only-504000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 16:23:49.521938    1915 notify.go:220] Checking for updates...
	W0721 16:23:49.521972    1915 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball: no such file or directory
	I0721 16:23:49.524742    1915 out.go:169] MINIKUBE_LOCATION=19312
	I0721 16:23:49.527758    1915 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 16:23:49.531688    1915 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 16:23:49.534744    1915 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 16:23:49.537774    1915 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	W0721 16:23:49.541757    1915 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0721 16:23:49.542032    1915 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 16:23:49.545780    1915 out.go:97] Using the qemu2 driver based on user configuration
	I0721 16:23:49.545797    1915 start.go:297] selected driver: qemu2
	I0721 16:23:49.545810    1915 start.go:901] validating driver "qemu2" against <nil>
	I0721 16:23:49.545872    1915 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 16:23:49.548742    1915 out.go:169] Automatically selected the socket_vmnet network
	I0721 16:23:49.554455    1915 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0721 16:23:49.554552    1915 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 16:23:49.554574    1915 cni.go:84] Creating CNI manager for ""
	I0721 16:23:49.554590    1915 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0721 16:23:49.554650    1915 start.go:340] cluster config:
	{Name:download-only-504000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 16:23:49.559722    1915 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 16:23:49.563784    1915 out.go:97] Downloading VM boot image ...
	I0721 16:23:49.563798    1915 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso
	I0721 16:23:54.963159    1915 out.go:97] Starting "download-only-504000" primary control-plane node in "download-only-504000" cluster
	I0721 16:23:54.963201    1915 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 16:23:55.019895    1915 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0721 16:23:55.019919    1915 cache.go:56] Caching tarball of preloaded images
	I0721 16:23:55.020062    1915 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 16:23:55.024201    1915 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0721 16:23:55.024208    1915 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0721 16:23:55.098900    1915 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0721 16:24:02.034198    1915 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0721 16:24:02.034379    1915 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0721 16:24:02.729850    1915 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0721 16:24:02.730047    1915 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/download-only-504000/config.json ...
	I0721 16:24:02.730078    1915 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/download-only-504000/config.json: {Name:mka7443ca39924a8a20a238c279262f6c536e549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 16:24:02.730310    1915 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 16:24:02.730501    1915 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0721 16:24:03.220164    1915 out.go:169] 
	W0721 16:24:03.224300    1915 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19312-1409/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106cc5a60 0x106cc5a60 0x106cc5a60 0x106cc5a60 0x106cc5a60 0x106cc5a60 0x106cc5a60] Decompressors:map[bz2:0x14000702ba0 gz:0x14000702ba8 tar:0x14000702ac0 tar.bz2:0x14000702ad0 tar.gz:0x14000702b30 tar.xz:0x14000702b40 tar.zst:0x14000702b80 tbz2:0x14000702ad0 tgz:0x14000702b30 txz:0x14000702b40 tzst:0x14000702b80 xz:0x14000702be0 zip:0x14000702c10 zst:0x14000702be8] Getters:map[file:0x140014185a0 http:0x140007e01e0 https:0x140007e0230] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0721 16:24:03.224326    1915 out_reason.go:110] 
	W0721 16:24:03.232271    1915 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 16:24:03.236199    1915 out.go:169] 
	
	
	* The control-plane node download-only-504000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-504000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
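The Last Start log above ends with kubectl caching failing: the checksum URL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 came back with a 404. The following is a small standalone sketch, assuming network access and using only the URL already quoted in the log, that performs the same status check.

// check_kubectl_checksum.go - hypothetical helper, not part of the minikube test suite.
// It issues a HEAD request against the v1.20.0 darwin/arm64 kubectl checksum URL quoted
// in the log above; the run above recorded "bad response code: 404" for this URL.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Head("https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}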

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-504000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (7.94s)
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-035000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-035000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (7.939582791s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (7.94s)

TestDownloadOnly/v1.30.3/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-035000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-035000: exit status 85 (76.6395ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-504000 | jenkins | v1.33.1 | 21 Jul 24 16:23 PDT |                     |
	|         | -p download-only-504000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT | 21 Jul 24 16:24 PDT |
	| delete  | -p download-only-504000        | download-only-504000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT | 21 Jul 24 16:24 PDT |
	| start   | -o=json --download-only        | download-only-035000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT |                     |
	|         | -p download-only-035000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 16:24:03
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 16:24:03.639042    1941 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:24:03.639192    1941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:24:03.639195    1941 out.go:304] Setting ErrFile to fd 2...
	I0721 16:24:03.639198    1941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:24:03.639315    1941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 16:24:03.640351    1941 out.go:298] Setting JSON to true
	I0721 16:24:03.656276    1941 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1406,"bootTime":1721602837,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 16:24:03.656346    1941 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 16:24:03.660127    1941 out.go:97] [download-only-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 16:24:03.660235    1941 notify.go:220] Checking for updates...
	I0721 16:24:03.664148    1941 out.go:169] MINIKUBE_LOCATION=19312
	I0721 16:24:03.667206    1941 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 16:24:03.671159    1941 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 16:24:03.674209    1941 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 16:24:03.677231    1941 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	W0721 16:24:03.683103    1941 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0721 16:24:03.683229    1941 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 16:24:03.686194    1941 out.go:97] Using the qemu2 driver based on user configuration
	I0721 16:24:03.686205    1941 start.go:297] selected driver: qemu2
	I0721 16:24:03.686210    1941 start.go:901] validating driver "qemu2" against <nil>
	I0721 16:24:03.686266    1941 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 16:24:03.689118    1941 out.go:169] Automatically selected the socket_vmnet network
	I0721 16:24:03.694266    1941 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0721 16:24:03.694357    1941 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 16:24:03.694413    1941 cni.go:84] Creating CNI manager for ""
	I0721 16:24:03.694421    1941 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 16:24:03.694429    1941 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 16:24:03.694465    1941 start.go:340] cluster config:
	{Name:download-only-035000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-035000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 16:24:03.697909    1941 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 16:24:03.701153    1941 out.go:97] Starting "download-only-035000" primary control-plane node in "download-only-035000" cluster
	I0721 16:24:03.701160    1941 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 16:24:03.760167    1941 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0721 16:24:03.760197    1941 cache.go:56] Caching tarball of preloaded images
	I0721 16:24:03.760368    1941 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 16:24:03.764437    1941 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0721 16:24:03.764445    1941 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0721 16:24:03.850018    1941 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-035000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-035000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-035000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (6.38s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-503000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-503000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (6.381287417s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (6.38s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-503000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-503000: exit status 85 (80.953791ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-504000 | jenkins | v1.33.1 | 21 Jul 24 16:23 PDT |                     |
	|         | -p download-only-504000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT | 21 Jul 24 16:24 PDT |
	| delete  | -p download-only-504000             | download-only-504000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT | 21 Jul 24 16:24 PDT |
	| start   | -o=json --download-only             | download-only-035000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT |                     |
	|         | -p download-only-035000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT | 21 Jul 24 16:24 PDT |
	| delete  | -p download-only-035000             | download-only-035000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT | 21 Jul 24 16:24 PDT |
	| start   | -o=json --download-only             | download-only-503000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT |                     |
	|         | -p download-only-503000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 16:24:11
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 16:24:11.859936    1963 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:24:11.860073    1963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:24:11.860076    1963 out.go:304] Setting ErrFile to fd 2...
	I0721 16:24:11.860079    1963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:24:11.860205    1963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 16:24:11.861317    1963 out.go:298] Setting JSON to true
	I0721 16:24:11.877324    1963 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1414,"bootTime":1721602837,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 16:24:11.877395    1963 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 16:24:11.880662    1963 out.go:97] [download-only-503000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 16:24:11.880738    1963 notify.go:220] Checking for updates...
	I0721 16:24:11.884759    1963 out.go:169] MINIKUBE_LOCATION=19312
	I0721 16:24:11.887717    1963 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 16:24:11.891706    1963 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 16:24:11.894741    1963 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 16:24:11.897629    1963 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	W0721 16:24:11.903674    1963 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0721 16:24:11.903822    1963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 16:24:11.905224    1963 out.go:97] Using the qemu2 driver based on user configuration
	I0721 16:24:11.905232    1963 start.go:297] selected driver: qemu2
	I0721 16:24:11.905238    1963 start.go:901] validating driver "qemu2" against <nil>
	I0721 16:24:11.905290    1963 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 16:24:11.908745    1963 out.go:169] Automatically selected the socket_vmnet network
	I0721 16:24:11.913812    1963 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0721 16:24:11.913915    1963 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 16:24:11.913947    1963 cni.go:84] Creating CNI manager for ""
	I0721 16:24:11.913956    1963 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 16:24:11.913966    1963 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 16:24:11.914008    1963 start.go:340] cluster config:
	{Name:download-only-503000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-503000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet St
aticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 16:24:11.917627    1963 iso.go:125] acquiring lock: {Name:mk9e3ea345453afec1b5d22edd5414758f3bb68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 16:24:11.920731    1963 out.go:97] Starting "download-only-503000" primary control-plane node in "download-only-503000" cluster
	I0721 16:24:11.920740    1963 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0721 16:24:11.975924    1963 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0721 16:24:11.975935    1963 cache.go:56] Caching tarball of preloaded images
	I0721 16:24:11.976129    1963 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0721 16:24:11.980309    1963 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0721 16:24:11.980316    1963 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0721 16:24:12.056894    1963 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0721 16:24:15.948693    1963 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0721 16:24:15.948849    1963 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0721 16:24:16.467927    1963 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0721 16:24:16.468135    1963 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/download-only-503000/config.json ...
	I0721 16:24:16.468156    1963 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/download-only-503000/config.json: {Name:mka93c736bab4175ceee1119a9d7f1a35bf8a253 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 16:24:16.468387    1963 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0721 16:24:16.468502    1963 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1409/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-503000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-503000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-503000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.28s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-215000 --alsologtostderr --binary-mirror http://127.0.0.1:49326 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-215000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-215000
--- PASS: TestBinaryMirror (0.28s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-480000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-480000: exit status 85 (54.638ms)

-- stdout --
	* Profile "addons-480000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-480000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-480000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-480000: exit status 85 (58.38925ms)

-- stdout --
	* Profile "addons-480000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-480000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (230.14s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-480000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-480000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m50.140989833s)
--- PASS: TestAddons/Setup (230.14s)

TestAddons/parallel/Registry (17.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 6.151042ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-879dm" [457e066f-cdf9-4003-b36f-4a5eab5c6dcc] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004253375s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-b65nv" [682f147f-3524-4305-a1d3-f22cf19649ca] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007145s
addons_test.go:342: (dbg) Run:  kubectl --context addons-480000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-480000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-480000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.621452042s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-480000 ip
2024/07/21 16:28:27 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-480000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.95s)

TestAddons/parallel/Ingress (17.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-480000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-480000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-480000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d5e63c66-fa04-4973-8da4-a3195e3f0ff1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d5e63c66-fa04-4973-8da4-a3195e3f0ff1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004295208s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-480000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-480000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-480000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-480000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-480000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-480000 addons disable ingress --alsologtostderr -v=1: (7.20641025s)
--- PASS: TestAddons/parallel/Ingress (17.57s)

TestAddons/parallel/InspektorGadget (10.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fsrbx" [cf8b865b-1432-4bcf-b811-6ad9e2466925] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004166542s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-480000
addons_test.go:843: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-480000: (5.242359125s)
--- PASS: TestAddons/parallel/InspektorGadget (10.25s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.384791ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-s4x87" [5740aba7-be17-4203-b010-48fb9267e0e4] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003915334s
addons_test.go:417: (dbg) Run:  kubectl --context addons-480000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-480000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

TestAddons/parallel/CSI (34.56s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 7.643291ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-480000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-480000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8161a86b-38d2-463f-ae5f-3c47918ad7ca] Pending
helpers_test.go:344: "task-pv-pod" [8161a86b-38d2-463f-ae5f-3c47918ad7ca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8161a86b-38d2-463f-ae5f-3c47918ad7ca] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00382575s
addons_test.go:586: (dbg) Run:  kubectl --context addons-480000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-480000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-480000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-480000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-480000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-480000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-480000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a2ecf792-d6ab-4891-bf6d-c3c0298ecd9f] Pending
helpers_test.go:344: "task-pv-pod-restore" [a2ecf792-d6ab-4891-bf6d-c3c0298ecd9f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a2ecf792-d6ab-4891-bf6d-c3c0298ecd9f] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003929s
addons_test.go:628: (dbg) Run:  kubectl --context addons-480000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-480000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-480000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-arm64 -p addons-480000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-arm64 -p addons-480000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.097799875s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-480000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (34.56s)

TestAddons/parallel/Headlamp (13.43s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-480000 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-plbc9" [c8e49435-36f6-4373-ad5a-78cccd8001af] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-plbc9" [c8e49435-36f6-4373-ad5a-78cccd8001af] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-plbc9" [c8e49435-36f6-4373-ad5a-78cccd8001af] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003961458s
--- PASS: TestAddons/parallel/Headlamp (13.43s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-x7xsn" [3d659074-f5ea-40d9-9663-cf785f437adf] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0037295s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-480000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (52.83s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-480000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-480000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7e0bb0f4-063e-4282-8f2e-d8a21ddf5449] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7e0bb0f4-063e-4282-8f2e-d8a21ddf5449] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7e0bb0f4-063e-4282-8f2e-d8a21ddf5449] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004046875s
addons_test.go:992: (dbg) Run:  kubectl --context addons-480000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-arm64 -p addons-480000 ssh "cat /opt/local-path-provisioner/pvc-b01f18ab-62ad-4fcd-8cac-6629cc5fe305_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-480000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-480000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-arm64 -p addons-480000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-arm64 -p addons-480000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.345449375s)
--- PASS: TestAddons/parallel/LocalPath (52.83s)

TestAddons/parallel/NvidiaDevicePlugin (5.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9724b" [5a1f3a6a-8a3f-4ca9-9779-1a22cce48f51] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004416291s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-480000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-fk8rk" [0457d745-abc8-404e-8e1b-75ee879340de] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003859917s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/parallel/Volcano (37.83s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:897: volcano-admission stabilized in 1.316583ms
addons_test.go:905: volcano-controller stabilized in 1.356375ms
addons_test.go:889: volcano-scheduler stabilized in 1.611875ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-4b9md" [b2a31146-8e27-4dd8-9eca-5afb8011b3a6] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.003711375s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-xvrzs" [4800f164-0c86-4ed2-9aa8-de00a9952017] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.003838333s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-v9q8z" [c3707925-0850-4e7a-9855-91a522b174cc] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.00358825s
addons_test.go:924: (dbg) Run:  kubectl --context addons-480000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-480000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-480000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a47035f0-2270-409d-a4e9-2af8545d50ae] Pending
helpers_test.go:344: "test-job-nginx-0" [a47035f0-2270-409d-a4e9-2af8545d50ae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a47035f0-2270-409d-a4e9-2af8545d50ae] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 13.003719291s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-arm64 -p addons-480000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-arm64 -p addons-480000 addons disable volcano --alsologtostderr -v=1: (9.6332325s)
--- PASS: TestAddons/parallel/Volcano (37.83s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-480000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-480000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-480000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-480000: (12.204866291s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-480000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-480000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-480000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (10.48s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.48s)

TestErrorSpam/setup (34.62s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-933000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-933000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 --driver=qemu2 : (34.614938666s)
--- PASS: TestErrorSpam/setup (34.62s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 pause
--- PASS: TestErrorSpam/pause (0.65s)

TestErrorSpam/unpause (0.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 unpause
--- PASS: TestErrorSpam/unpause (0.61s)

TestErrorSpam/stop (64.3s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 stop: (12.202981166s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 stop: (26.056204292s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-933000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-933000 stop: (26.03377s)
--- PASS: TestErrorSpam/stop (64.30s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19312-1409/.minikube/files/etc/test/nested/copy/1911/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (90.66s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-044000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0721 16:33:09.360565    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:33:09.367491    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:33:09.379550    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:33:09.401606    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:33:09.443649    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:33:09.524882    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:33:09.685187    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:33:10.007271    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:33:10.649417    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:33:11.931536    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:33:14.493607    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:33:19.615619    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:33:29.857136    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-044000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m30.655388417s)
--- PASS: TestFunctional/serial/StartWithProxy (90.66s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.2s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-044000 --alsologtostderr -v=8
E0721 16:33:50.338811    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-044000 --alsologtostderr -v=8: (34.20237675s)
functional_test.go:659: soft start took 34.202737208s for "functional-044000" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.20s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-044000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-044000 cache add registry.k8s.io/pause:3.1: (3.735245791s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-044000 cache add registry.k8s.io/pause:3.3: (3.660313459s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-044000 cache add registry.k8s.io/pause:latest: (2.483391875s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.88s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-044000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1983479985/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cache add minikube-local-cache-test:functional-044000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cache delete minikube-local-cache-test:functional-044000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-044000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (64.205833ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cache reload
E0721 16:34:31.299892    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-044000 cache reload: (2.151789958s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.36s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.66s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 kubectl -- --context functional-044000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-044000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

TestFunctional/serial/ExtraConfig (38.22s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-044000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-044000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.2207115s)
functional_test.go:757: restart took 38.22082225s for "functional-044000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.22s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-044000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.58s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2448673360/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.58s)

TestFunctional/serial/InvalidService (3.91s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-044000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-044000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-044000: exit status 115 (96.47825ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31775 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-044000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.91s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 config get cpus: exit status 14 (30.390417ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 config get cpus: exit status 14 (31.665208ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

TestFunctional/parallel/DashboardCmd (6.42s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-044000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-044000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3002: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.42s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-044000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-044000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (120.756375ms)

-- stdout --
	* [functional-044000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0721 16:36:09.930332    2978 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:36:09.930479    2978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:36:09.930483    2978 out.go:304] Setting ErrFile to fd 2...
	I0721 16:36:09.930485    2978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:36:09.930622    2978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 16:36:09.931640    2978 out.go:298] Setting JSON to false
	I0721 16:36:09.949216    2978 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2132,"bootTime":1721602837,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 16:36:09.949282    2978 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 16:36:09.955076    2978 out.go:177] * [functional-044000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0721 16:36:09.964023    2978 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 16:36:09.964077    2978 notify.go:220] Checking for updates...
	I0721 16:36:09.972007    2978 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 16:36:09.976027    2978 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 16:36:09.979064    2978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 16:36:09.981991    2978 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 16:36:09.985013    2978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 16:36:09.988383    2978 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 16:36:09.988669    2978 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 16:36:09.993019    2978 out.go:177] * Using the qemu2 driver based on existing profile
	I0721 16:36:10.000024    2978 start.go:297] selected driver: qemu2
	I0721 16:36:10.000030    2978 start.go:901] validating driver "qemu2" against &{Name:functional-044000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 16:36:10.000082    2978 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 16:36:10.007052    2978 out.go:177] 
	W0721 16:36:10.011028    2978 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0721 16:36:10.013914    2978 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-044000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-044000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-044000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.751959ms)

-- stdout --
	* [functional-044000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0721 16:36:10.156331    2989 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:36:10.156435    2989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:36:10.156438    2989 out.go:304] Setting ErrFile to fd 2...
	I0721 16:36:10.156440    2989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:36:10.156575    2989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
	I0721 16:36:10.157999    2989 out.go:298] Setting JSON to false
	I0721 16:36:10.174830    2989 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2133,"bootTime":1721602837,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0721 16:36:10.174918    2989 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 16:36:10.178120    2989 out.go:177] * [functional-044000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0721 16:36:10.185066    2989 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 16:36:10.185101    2989 notify.go:220] Checking for updates...
	I0721 16:36:10.192108    2989 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	I0721 16:36:10.195061    2989 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0721 16:36:10.198040    2989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 16:36:10.200947    2989 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	I0721 16:36:10.204043    2989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 16:36:10.207338    2989 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 16:36:10.207586    2989 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 16:36:10.212002    2989 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0721 16:36:10.219006    2989 start.go:297] selected driver: qemu2
	I0721 16:36:10.219013    2989 start.go:901] validating driver "qemu2" against &{Name:functional-044000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 16:36:10.219055    2989 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 16:36:10.225019    2989 out.go:177] 
	W0721 16:36:10.229017    2989 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0721 16:36:10.232922    2989 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.23s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.23s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (25.5s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [596f14b9-76e8-4b89-b4cd-23a44fe523f7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003957834s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-044000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-044000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-044000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-044000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2872c6f0-730e-49dc-b674-3cad4f10b4ea] Pending
helpers_test.go:344: "sp-pod" [2872c6f0-730e-49dc-b674-3cad4f10b4ea] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2872c6f0-730e-49dc-b674-3cad4f10b4ea] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00406525s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-044000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-044000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-044000 delete -f testdata/storage-provisioner/pod.yaml: (1.101304167s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-044000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ba822f3b-4e72-440f-8cac-0004dd7347c5] Pending
helpers_test.go:344: "sp-pod" [ba822f3b-4e72-440f-8cac-0004dd7347c5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ba822f3b-4e72-440f-8cac-0004dd7347c5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003658792s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-044000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.50s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh -n functional-044000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cp functional-044000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1434402615/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh -n functional-044000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh -n functional-044000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.43s)

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1911/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /etc/test/nested/copy/1911/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.37s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1911.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /etc/ssl/certs/1911.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1911.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /usr/share/ca-certificates/1911.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/19112.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /etc/ssl/certs/19112.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/19112.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /usr/share/ca-certificates/19112.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.37s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-044000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh "sudo systemctl is-active crio": exit status 1 (65.806417ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-044000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-044000
docker.io/kicbase/echo-server:functional-044000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-044000 image ls --format short --alsologtostderr:
I0721 16:36:11.935202    3017 out.go:291] Setting OutFile to fd 1 ...
I0721 16:36:11.935378    3017 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:11.935384    3017 out.go:304] Setting ErrFile to fd 2...
I0721 16:36:11.935386    3017 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:11.935524    3017 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
I0721 16:36:11.936007    3017 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:11.936066    3017 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:11.936847    3017 ssh_runner.go:195] Run: systemctl --version
I0721 16:36:11.936856    3017 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/functional-044000/id_rsa Username:docker}
I0721 16:36:11.958489    3017 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.06s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-044000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/minikube-local-cache-test | functional-044000 | 5d1c43773dc8f | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| docker.io/library/nginx                     | alpine            | 5461b18aaccf3 | 44.8MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| docker.io/library/nginx                     | latest            | 443d199e8bfcc | 193MB  |
| docker.io/kicbase/echo-server               | functional-044000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-044000 image ls --format table --alsologtostderr:
I0721 16:36:16.755316    3029 out.go:291] Setting OutFile to fd 1 ...
I0721 16:36:16.755483    3029 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:16.755487    3029 out.go:304] Setting ErrFile to fd 2...
I0721 16:36:16.755490    3029 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:16.755632    3029 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
I0721 16:36:16.756088    3029 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:16.756157    3029 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:16.756951    3029 ssh_runner.go:195] Run: systemctl --version
I0721 16:36:16.756959    3029 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/functional-044000/id_rsa Username:docker}
I0721 16:36:16.779031    3029 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-044000 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-044000"],"size":"4780000"},{"id":"1611cd07b61d57
dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"5d1c43773dc8fde55718cd3cbde3dc2f9c6805d23fefc1069a92811c926cb41b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-044000"],"size":"30"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f
2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-044000 image ls --format json --alsologtostderr:
I0721 16:36:16.687585    3027 out.go:291] Setting OutFile to fd 1 ...
I0721 16:36:16.687749    3027 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:16.687752    3027 out.go:304] Setting ErrFile to fd 2...
I0721 16:36:16.687755    3027 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:16.687883    3027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
I0721 16:36:16.688283    3027 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:16.688343    3027 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:16.689114    3027 ssh_runner.go:195] Run: systemctl --version
I0721 16:36:16.689123    3027 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/functional-044000/id_rsa Username:docker}
I0721 16:36:16.709784    3027 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
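The JSON emitted by "image ls --format json" above is a flat array of image records with id, repoDigests, repoTags, and size fields. Purely as an illustration (not part of the test suite), a minimal Go sketch that shells out to the same command and decodes that array could look like this; the binary path and profile name are copied from the log and will differ on other runs.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Same invocation as the test log; adjust the binary path and profile for your setup.
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-044000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}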

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-044000 image ls --format yaml --alsologtostderr:
- id: 5d1c43773dc8fde55718cd3cbde3dc2f9c6805d23fefc1069a92811c926cb41b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-044000
size: "30"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-044000
size: "4780000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-044000 image ls --format yaml --alsologtostderr:
I0721 16:36:12.000030    3019 out.go:291] Setting OutFile to fd 1 ...
I0721 16:36:12.000217    3019 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:12.000225    3019 out.go:304] Setting ErrFile to fd 2...
I0721 16:36:12.000227    3019 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:12.000367    3019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
I0721 16:36:12.000788    3019 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:12.000859    3019 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:12.001723    3019 ssh_runner.go:195] Run: systemctl --version
I0721 16:36:12.001735    3019 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/functional-044000/id_rsa Username:docker}
I0721 16:36:12.023524    3019 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh pgrep buildkitd: exit status 1 (53.6455ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image build -t localhost/my-image:functional-044000 testdata/build --alsologtostderr
2024/07/21 16:36:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-044000 image build -t localhost/my-image:functional-044000 testdata/build --alsologtostderr: (5.729411042s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-044000 image build -t localhost/my-image:functional-044000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in f9cf9decaeaa
---> Removed intermediate container f9cf9decaeaa
---> e233cee50b66
Step 3/3 : ADD content.txt /
---> f1b67fbd84b4
Successfully built f1b67fbd84b4
Successfully tagged localhost/my-image:functional-044000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-044000 image build -t localhost/my-image:functional-044000 testdata/build --alsologtostderr:
I0721 16:36:12.118526    3023 out.go:291] Setting OutFile to fd 1 ...
I0721 16:36:12.118759    3023 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:12.118763    3023 out.go:304] Setting ErrFile to fd 2...
I0721 16:36:12.118765    3023 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:12.118912    3023 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1409/.minikube/bin
I0721 16:36:12.119364    3023 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:12.120060    3023 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:12.120926    3023 ssh_runner.go:195] Run: systemctl --version
I0721 16:36:12.120935    3023 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1409/.minikube/machines/functional-044000/id_rsa Username:docker}
I0721 16:36:12.141446    3023 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2838532119.tar
I0721 16:36:12.141500    3023 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0721 16:36:12.144948    3023 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2838532119.tar
I0721 16:36:12.146426    3023 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2838532119.tar: stat -c "%s %y" /var/lib/minikube/build/build.2838532119.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2838532119.tar': No such file or directory
I0721 16:36:12.146442    3023 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2838532119.tar --> /var/lib/minikube/build/build.2838532119.tar (3072 bytes)
I0721 16:36:12.154956    3023 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2838532119
I0721 16:36:12.159046    3023 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2838532119 -xf /var/lib/minikube/build/build.2838532119.tar
I0721 16:36:12.162354    3023 docker.go:360] Building image: /var/lib/minikube/build/build.2838532119
I0721 16:36:12.162399    3023 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-044000 /var/lib/minikube/build/build.2838532119
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0721 16:36:17.806947    3023 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-044000 /var/lib/minikube/build/build.2838532119: (5.644693834s)
I0721 16:36:17.807008    3023 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2838532119
I0721 16:36:17.810584    3023 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2838532119.tar
I0721 16:36:17.813809    3023 build_images.go:217] Built localhost/my-image:functional-044000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2838532119.tar
I0721 16:36:17.813824    3023 build_images.go:133] succeeded building to: functional-044000
I0721 16:36:17.813828    3023 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.85s)
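The build log above shows the three Dockerfile steps produced by the test's testdata/build context (FROM busybox, RUN true, ADD content.txt). The sketch below is illustrative rather than the test's actual code: it recreates an equivalent context in a temp directory and runs the same "image build" command. The Dockerfile text is reconstructed from the step output and is an assumption, as is the content of content.txt.

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Reconstructed from the build steps in the log above (assumption, not the real testdata/build).
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"

	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Same command shape as the test: minikube image build -t <tag> <context>.
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-044000",
		"image", "build", "-t", "localhost/my-image:functional-044000", dir, "--alsologtostderr")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}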

TestFunctional/parallel/ImageCommands/Setup (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.735344667s)
functional_test.go:346: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-044000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/DockerEnv/bash (0.31s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-044000 docker-env) && out/minikube-darwin-arm64 status -p functional-044000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-044000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.31s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-044000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-044000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-gnr4k" [8cc249af-62a6-47b2-818d-cd9d2aeb39ea] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-gnr4k" [8cc249af-62a6-47b2-818d-cd9d2aeb39ea] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.003602959s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.08s)
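The DeployApp step above creates a deployment, exposes it as a NodePort service, and then waits for app=hello-node pods to become healthy. A rough standalone equivalent of that flow, shelling out to kubectl the way the helpers do, might look like the sketch below; the deployment name, image, port, and label are taken from the log, while the polling logic is a simplification (it only checks pod phase, not container readiness).

package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

// run invokes kubectl against the same context the test uses.
func run(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-044000"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	if _, err := run("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver-arm:1.8"); err != nil {
		log.Fatal(err)
	}
	if _, err := run("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"); err != nil {
		log.Fatal(err)
	}

	// Poll the pod phase until everything matching app=hello-node reports Running.
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		phases, err := run("get", "pods", "-l", "app=hello-node", "-o", "jsonpath={.items[*].status.phase}")
		if err == nil && phases != "" && !strings.Contains(phases, "Pending") {
			log.Printf("pods healthy: %s", phases)
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for app=hello-node pods")
}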

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image load --daemon kicbase/echo-server:functional-044000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.44s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image load --daemon kicbase/echo-server:functional-044000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.34s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-044000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image load --daemon kicbase/echo-server:functional-044000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image save kicbase/echo-server:functional-044000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image rm kicbase/echo-server:functional-044000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi kicbase/echo-server:functional-044000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image save --daemon kicbase/echo-server:functional-044000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect kicbase/echo-server:functional-044000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.16s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-044000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-044000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-044000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-044000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2838: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-044000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-044000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c022717d-982b-4e6e-bfe0-ffd15a9770a6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c022717d-982b-4e6e-bfe0-ffd15a9770a6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.003655541s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.10s)

TestFunctional/parallel/ServiceCmd/List (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.08s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 service list -o json
functional_test.go:1490: Took "76.975709ms" to run "out/minikube-darwin-arm64 -p functional-044000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:31151
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:31151
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)
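Both the HTTPS and URL subtests above resolve the hello-node NodePort endpoint on the node IP (192.168.105.4:31151 in this run). A quick check of such an endpoint from Go is just an HTTP GET; this is only an illustration, and the address changes on every run.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Endpoint reported by the test above; substitute the URL printed for your run.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.105.4:31151")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("status=%s bytes=%d\n", resp.Status, len(body))
}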

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-044000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.207.63 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
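The dig invocation above queries the cluster DNS service directly at 10.96.0.10 for the in-cluster service name. The same lookup can be reproduced from Go with a resolver pinned to that server; this is only a sketch, and it assumes the tunnel started earlier is still routing 10.96.0.10 from the host.

package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Resolver pinned to the cluster DNS address used by the dig test above.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("resolved:", addrs)
}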

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-044000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "79.16225ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.785542ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "79.443875ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "34.358208ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/parallel/MountCmd/any-port (9.04s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2064888309/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721604958591528000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2064888309/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721604958591528000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2064888309/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721604958591528000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2064888309/001/test-1721604958591528000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (56.252875ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 21 23:35 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 21 23:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 21 23:35 test-1721604958591528000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh cat /mount-9p/test-1721604958591528000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-044000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f84d2293-2028-40ed-8c99-aeee4d18f14e] Pending
helpers_test.go:344: "busybox-mount" [f84d2293-2028-40ed-8c99-aeee4d18f14e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f84d2293-2028-40ed-8c99-aeee4d18f14e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f84d2293-2028-40ed-8c99-aeee4d18f14e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.003784833s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-044000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2064888309/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.04s)
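The any-port mount test starts "minikube mount" as a long-running background process and then verifies the 9p mount from inside the guest with findmnt over ssh. A standalone sketch of that start-then-verify shape is below; it is illustrative only, the temp directory stands in for the test's generated path, and the fixed sleep is a simplification of the test's retry logic.

package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	dir, err := os.MkdirTemp("", "mount-src")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	// Start the mount in the background, the way the test's daemon helper does.
	mount := exec.Command("out/minikube-darwin-arm64", "mount", "-p", "functional-044000",
		dir+":/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill()

	// Give the mount a moment to appear, then verify it from inside the guest.
	time.Sleep(3 * time.Second)
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-044000",
		"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
	if err != nil {
		log.Fatalf("mount not visible yet: %v\n%s", err, out)
	}
	log.Printf("mount verified:\n%s", out)
}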

TestFunctional/parallel/MountCmd/specific-port (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port672247181/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (54.476417ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (52.361666ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port672247181/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh "sudo umount -f /mount-9p": exit status 1 (54.686084ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-044000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port672247181/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.65s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup337015648/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup337015648/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup337015648/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T" /mount1: exit status 1 (65.826ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-044000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup337015648/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup337015648/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup337015648/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.65s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-044000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-044000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-044000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (368.58s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-736000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0721 16:38:09.352163    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:38:37.057548    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
E0721 16:40:19.047972    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:40:19.053571    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:40:19.065130    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:40:19.087201    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:40:19.129284    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:40:19.211182    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:40:19.372446    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:40:19.694523    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:40:20.336656    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:40:21.618761    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:40:24.180903    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:40:29.302950    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:40:39.544875    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:41:00.026440    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:41:40.987537    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-736000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (6m8.386344375s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (368.58s)

TestMultiControlPlane/serial/DeployApp (9.84s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-736000 -- rollout status deployment/busybox: (8.430594s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-4246q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-5tgdx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-prmsf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-4246q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-5tgdx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-prmsf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-4246q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-5tgdx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-prmsf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.84s)

TestMultiControlPlane/serial/PingHostFromPods (0.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-4246q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-4246q -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-5tgdx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-5tgdx -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-prmsf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-736000 -- exec busybox-fc5497c4f-prmsf -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.73s)

TestMultiControlPlane/serial/AddWorkerNode (89.01s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-736000 -v=7 --alsologtostderr
E0721 16:43:02.905671    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/functional-044000/client.crt: no such file or directory
E0721 16:43:09.342976    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-736000 -v=7 --alsologtostderr: (1m28.782307875s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (89.01s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-736000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.74246325s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.74s)

TestMultiControlPlane/serial/CopyFile (4.39s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp testdata/cp-test.txt ha-736000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1237583822/001/cp-test_ha-736000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000:/home/docker/cp-test.txt ha-736000-m02:/home/docker/cp-test_ha-736000_ha-736000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test_ha-736000_ha-736000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000:/home/docker/cp-test.txt ha-736000-m03:/home/docker/cp-test_ha-736000_ha-736000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test_ha-736000_ha-736000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000:/home/docker/cp-test.txt ha-736000-m04:/home/docker/cp-test_ha-736000_ha-736000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test_ha-736000_ha-736000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp testdata/cp-test.txt ha-736000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1237583822/001/cp-test_ha-736000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000-m02:/home/docker/cp-test.txt ha-736000:/home/docker/cp-test_ha-736000-m02_ha-736000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test_ha-736000-m02_ha-736000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000-m02:/home/docker/cp-test.txt ha-736000-m03:/home/docker/cp-test_ha-736000-m02_ha-736000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test_ha-736000-m02_ha-736000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000-m02:/home/docker/cp-test.txt ha-736000-m04:/home/docker/cp-test_ha-736000-m02_ha-736000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test_ha-736000-m02_ha-736000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp testdata/cp-test.txt ha-736000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1237583822/001/cp-test_ha-736000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt ha-736000:/home/docker/cp-test_ha-736000-m03_ha-736000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test_ha-736000-m03_ha-736000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt ha-736000-m02:/home/docker/cp-test_ha-736000-m03_ha-736000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test_ha-736000-m03_ha-736000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt ha-736000-m04:/home/docker/cp-test_ha-736000-m03_ha-736000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test_ha-736000-m03_ha-736000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp testdata/cp-test.txt ha-736000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1237583822/001/cp-test_ha-736000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt ha-736000:/home/docker/cp-test_ha-736000-m04_ha-736000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test_ha-736000-m04_ha-736000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt ha-736000-m02:/home/docker/cp-test_ha-736000-m04_ha-736000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test_ha-736000-m04_ha-736000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt ha-736000-m03:/home/docker/cp-test_ha-736000-m04_ha-736000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test_ha-736000-m04_ha-736000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.39s)
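
For reference, the cp/ssh sequence above is a copy-then-read-back round trip across the cluster's nodes. A minimal standalone sketch of that pattern in Go (illustrative only; the binary path, profile, node name, and file paths are copied from this log, and this is not the actual helpers_test.go implementation) could look like:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const (
		profile = "ha-736000"                // profile name from the log above
		node    = "ha-736000-m02"            // any node in the cluster
		src     = "testdata/cp-test.txt"     // local file to copy
		dst     = "/home/docker/cp-test.txt" // destination path inside the node
	)

	// Copy the local file into the node ("minikube -p <profile> cp <src> <node>:<dst>").
	if out, err := exec.Command("out/minikube-darwin-arm64", "-p", profile,
		"cp", src, node+":"+dst).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// Read the file back over SSH and compare it with the local contents.
	got, err := exec.Command("out/minikube-darwin-arm64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+dst).Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		log.Fatal(err)
	}
	if strings.TrimSpace(string(got)) != strings.TrimSpace(string(want)) {
		log.Fatalf("contents differ after round trip")
	}
	fmt.Println("copy round trip OK")
}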

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (77.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0721 16:53:09.325790    1911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1409/.minikube/profiles/addons-480000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.987713083s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (77.99s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (3.8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-930000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-930000 --output=json --user=testUser: (3.801538208s)
--- PASS: TestJSONOutput/stop/Command (3.80s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-522000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-522000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.735167ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6bb12af0-c710-461d-9a3f-ebc273224a18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-522000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9670580f-3505-40d2-bb5f-8905d305f7d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"f92c23a6-f61d-4a94-b705-0872d1910f22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig"}}
	{"specversion":"1.0","id":"c195da34-ff9a-4d20-9c80-14bcaecd88cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2c09150d-3b0e-45e2-a07e-316979824211","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f7ab2d91-c505-488e-ace1-8ad281efe50e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube"}}
	{"specversion":"1.0","id":"73cbd6b1-0f5d-4797-873e-a452f94eae44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"47e7db25-5292-49fa-a904-e1e905b71a79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-522000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-522000
--- PASS: TestErrorJSONOutput (0.20s)
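
For reference, each stdout line captured above is a CloudEvents-style JSON object with a type such as io.k8s.sigs.minikube.step or io.k8s.sigs.minikube.error and a string-valued data map. A minimal Go sketch that consumes such a stream (illustrative only; the field names are taken from the output above) could look like:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the captured stdout; data values are all strings.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe the --output=json stream in, e.g.:
	//   out/minikube-darwin-arm64 start -p <profile> --output=json ... | ./thisprogram
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not a JSON event line
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}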

                                                
                                    
TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.95s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.95s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-731000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-731000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.873ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
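
For reference, this test only asserts that the conflicting flag combination is rejected: exit status 14 with an MK_USAGE message on stderr, as captured above. A minimal Go sketch of the same check (illustrative only; the binary path, profile, and flags are copied from the command above) could look like:

package main

import (
	"bytes"
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "NoKubernetes-731000", "--no-kubernetes",
		"--kubernetes-version=1.20", "--driver=qemu2")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr

	err := cmd.Run()
	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) {
		log.Fatalf("expected the flag conflict to be rejected, got: %v", err)
	}
	// The run captured above exited with status 14 and an
	// "Exiting due to MK_USAGE" message on stderr.
	fmt.Printf("rejected with exit code %d\n%s", exitErr.ExitCode(), stderr.String())
}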

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-731000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-731000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.402583ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-731000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-731000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.23s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.616991083s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.616918916s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.23s)

                                                
                                    
TestNoKubernetes/serial/Stop (4.08s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-731000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-731000: (4.083252625s)
--- PASS: TestNoKubernetes/serial/Stop (4.08s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-731000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-731000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.248458ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-731000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-731000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-930000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-749000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-749000 --alsologtostderr -v=3: (3.911716584s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (52.781417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-749000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
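
For reference, the "(may be ok)" note above reflects that "minikube status" exits non-zero (exit status 7 here) when the host is stopped, and the test then enables the dashboard addon against the stopped profile. A minimal Go sketch of that tolerate-then-enable pattern (illustrative only; the binary path, profile, and image flag are copied from the commands above) could look like:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const profile = "old-k8s-version-749000" // profile name from the log above

	// "status" exits non-zero for a stopped host; the log above shows exit status 7.
	err := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		// host is running
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		fmt.Println("host is stopped (exit status 7), treated as acceptable here")
	default:
		log.Fatalf("unexpected status failure: %v", err)
	}

	// Enabling an addon still works against the stopped profile.
	if out, err := exec.Command("out/minikube-darwin-arm64", "addons", "enable", "dashboard",
		"-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4").CombinedOutput(); err != nil {
		log.Fatalf("addons enable failed: %v\n%s", err, out)
	}
	fmt.Println("dashboard addon enabled")
}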

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (3.73s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-980000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-980000 --alsologtostderr -v=3: (3.731895709s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.73s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-980000 -n no-preload-980000: exit status 7 (56.399625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-980000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (3.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-540000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-540000 --alsologtostderr -v=3: (3.895348459s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-540000 -n embed-certs-540000: exit status 7 (56.728292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-540000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (1.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-170000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-170000 --alsologtostderr -v=3: (1.765506542s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.77s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-170000 -n default-k8s-diff-port-170000: exit status 7 (57.406584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-170000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-208000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (3.85s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-208000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-208000 --alsologtostderr -v=3: (3.846955292s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.85s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-208000 -n newest-cni-208000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-208000 -n newest-cni-208000: exit status 7 (61.020042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-208000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (23/282)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (2.27s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-396000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-396000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-396000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-396000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-396000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-396000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-396000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-396000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-396000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-396000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-396000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-396000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-396000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-396000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-396000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-396000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-396000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: kubelet daemon config:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> k8s: kubelet logs:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

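Note: the kubeconfig dump above is empty (clusters, contexts, and users are all null), which is consistent with every "context was not found" / "context does not exist" line in this log. As a minimal sketch only (not part of the test suite), a debug-log collector could verify the context exists before issuing per-context kubectl calls; the kubeconfig path and context name below are illustrative, and the snippet assumes the k8s.io/client-go module is available:

// Minimal sketch only: check whether a kubeconfig contains the expected
// context before running per-context kubectl commands. Assumes
// k8s.io/client-go; the path and context name are illustrative, not
// taken from the report.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.LoadFromFile(filepath.Join(home, ".kube", "config"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read kubeconfig:", err)
		os.Exit(1)
	}
	// An empty config (clusters: null, contexts: null) means any
	// --context lookup will fail, as seen throughout this log.
	if _, ok := cfg.Contexts["cilium-396000"]; !ok {
		fmt.Println(`context "cilium-396000" does not exist; skipping kubectl-based collection`)
		os.Exit(0)
	}
	fmt.Println("context found; per-context kubectl commands are safe to run")
}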
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-396000

>>> host: docker daemon status:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: docker daemon config:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: docker system info:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: cri-docker daemon status:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: cri-docker daemon config:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: cri-dockerd version:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: containerd daemon status:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: containerd daemon config:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: containerd config dump:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: crio daemon status:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: crio daemon config:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: /etc/crio:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

>>> host: crio config:
* Profile "cilium-396000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-396000"

----------------------- debugLogs end: cilium-396000 [took: 2.172719166s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-396000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-396000
--- SKIP: TestNetworkPlugins/group/cilium (2.27s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-181000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-181000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)
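Note: the skip above is driver-dependent. A minimal sketch of that pattern with Go's testing package (not minikube's actual test code; the driverName variable is a hypothetical stand-in for however the real suite selects its driver):

// Minimal sketch only: a test that skips itself unless the configured
// driver is virtualbox, mirroring the "only runs on virtualbox" skip
// recorded above.
package example

import "testing"

var driverName = "qemu2" // hypothetical; the real value comes from test flags

func TestDisableDriverMountsSketch(t *testing.T) {
	if driverName != "virtualbox" {
		t.Skipf("skipping: only runs on virtualbox (current driver: %s)", driverName)
	}
	// The real test would exercise disabled driver mounts here.
}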
